00:00:00.001 Started by upstream project "autotest-per-patch" build number 132391
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.124 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.125 The recommended git tool is: git
00:00:00.125 using credential 00000000-0000-0000-0000-000000000002
00:00:00.131 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.178 Fetching changes from the remote Git repository
00:00:00.179 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.224 Using shallow fetch with depth 1
00:00:00.224 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.224 > git --version # timeout=10
00:00:00.260 > git --version # 'git version 2.39.2'
00:00:00.260 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.281 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.281 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.055 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.067 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.082 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.082 > git config core.sparsecheckout # timeout=10
00:00:07.093 > git read-tree -mu HEAD # timeout=10
00:00:07.110 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.131 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.131 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.218 [Pipeline] Start of Pipeline
00:00:07.234 [Pipeline] library
00:00:07.236 Loading library shm_lib@master
00:00:07.236 Library shm_lib@master is cached. Copying from home.
00:00:07.255 [Pipeline] node
00:00:07.267 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.268 [Pipeline] {
00:00:07.276 [Pipeline] catchError
00:00:07.277 [Pipeline] {
00:00:07.285 [Pipeline] wrap
00:00:07.291 [Pipeline] {
00:00:07.296 [Pipeline] stage
00:00:07.297 [Pipeline] { (Prologue)
00:00:07.310 [Pipeline] echo
00:00:07.311 Node: VM-host-WFP1
00:00:07.316 [Pipeline] cleanWs
00:00:07.324 [WS-CLEANUP] Deleting project workspace...
00:00:07.324 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.331 [WS-CLEANUP] done
00:00:07.561 [Pipeline] setCustomBuildProperty
00:00:07.649 [Pipeline] httpRequest
00:00:08.221 [Pipeline] echo
00:00:08.223 Sorcerer 10.211.164.20 is alive
00:00:08.233 [Pipeline] retry
00:00:08.235 [Pipeline] {
00:00:08.249 [Pipeline] httpRequest
00:00:08.254 HttpMethod: GET
00:00:08.255 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.255 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.275 Response Code: HTTP/1.1 200 OK
00:00:08.275 Success: Status code 200 is in the accepted range: 200,404
00:00:08.276 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.120 [Pipeline] }
00:00:13.135 [Pipeline] // retry
00:00:13.141 [Pipeline] sh
00:00:13.424 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.439 [Pipeline] httpRequest
00:00:14.150 [Pipeline] echo
00:00:14.151 Sorcerer 10.211.164.20 is alive
00:00:14.163 [Pipeline] retry
00:00:14.165 [Pipeline] {
00:00:14.177 [Pipeline] httpRequest
00:00:14.181 HttpMethod: GET
00:00:14.181 URL: http://10.211.164.20/packages/spdk_d2ebd983ec796cf3c9bd94783f62b7de1f7bf0f0.tar.gz
00:00:14.182 Sending request to url: http://10.211.164.20/packages/spdk_d2ebd983ec796cf3c9bd94783f62b7de1f7bf0f0.tar.gz
00:00:14.205 Response Code: HTTP/1.1 200 OK
00:00:14.206 Success: Status code 200 is in the accepted range: 200,404
00:00:14.206 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_d2ebd983ec796cf3c9bd94783f62b7de1f7bf0f0.tar.gz
00:05:44.576 [Pipeline] }
00:05:44.594 [Pipeline] // retry
00:05:44.602 [Pipeline] sh
00:05:44.887 + tar --no-same-owner -xf spdk_d2ebd983ec796cf3c9bd94783f62b7de1f7bf0f0.tar.gz
00:05:47.482 [Pipeline] sh
00:05:47.788 + git -C spdk log --oneline -n5
00:05:47.788 d2ebd983e bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:05:47.788 fa4f4fd15 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:05:47.788 b1f0bbae7 nvmf: Expose DIF type of namespace to host again
00:05:47.788 f9d18d578 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:05:47.788 a361eb5e2 nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK
00:05:47.805 [Pipeline] writeFile
00:05:47.819 [Pipeline] sh
00:05:48.103 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:05:48.114 [Pipeline] sh
00:05:48.398 + cat autorun-spdk.conf
00:05:48.398 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:48.398 SPDK_TEST_NVME=1
00:05:48.398 SPDK_TEST_FTL=1
00:05:48.398 SPDK_TEST_ISAL=1
00:05:48.398 SPDK_RUN_ASAN=1
00:05:48.398 SPDK_RUN_UBSAN=1
00:05:48.398 SPDK_TEST_XNVME=1
00:05:48.398 SPDK_TEST_NVME_FDP=1
00:05:48.398 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:48.406 RUN_NIGHTLY=0
00:05:48.408 [Pipeline] }
00:05:48.421 [Pipeline] // stage
00:05:48.436 [Pipeline] stage
00:05:48.438 [Pipeline] { (Run VM)
00:05:48.451 [Pipeline] sh
00:05:48.736 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:05:48.736 + echo 'Start stage prepare_nvme.sh'
00:05:48.736 Start stage prepare_nvme.sh
00:05:48.736 + [[ -n 5 ]]
00:05:48.736 + disk_prefix=ex5
00:05:48.736 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:05:48.736 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:05:48.736 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:05:48.736 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:48.736 ++ SPDK_TEST_NVME=1
00:05:48.736 ++ SPDK_TEST_FTL=1
00:05:48.736 ++ SPDK_TEST_ISAL=1
00:05:48.736 ++ SPDK_RUN_ASAN=1
00:05:48.736 ++ SPDK_RUN_UBSAN=1
00:05:48.736 ++ SPDK_TEST_XNVME=1
00:05:48.736 ++ SPDK_TEST_NVME_FDP=1
00:05:48.736 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:48.736 ++ RUN_NIGHTLY=0
00:05:48.736 + cd /var/jenkins/workspace/nvme-vg-autotest
00:05:48.736 + nvme_files=()
00:05:48.736 + declare -A nvme_files
00:05:48.736 + backend_dir=/var/lib/libvirt/images/backends
00:05:48.736 + nvme_files['nvme.img']=5G
00:05:48.736 + nvme_files['nvme-cmb.img']=5G
00:05:48.736 + nvme_files['nvme-multi0.img']=4G
00:05:48.736 + nvme_files['nvme-multi1.img']=4G
00:05:48.736 + nvme_files['nvme-multi2.img']=4G
00:05:48.736 + nvme_files['nvme-openstack.img']=8G
00:05:48.736 + nvme_files['nvme-zns.img']=5G
00:05:48.736 + (( SPDK_TEST_NVME_PMR == 1 ))
00:05:48.736 + (( SPDK_TEST_FTL == 1 ))
00:05:48.736 + nvme_files["nvme-ftl.img"]=6G
00:05:48.736 + (( SPDK_TEST_NVME_FDP == 1 ))
00:05:48.736 + nvme_files["nvme-fdp.img"]=1G
00:05:48.736 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:05:48.736 + for nvme in "${!nvme_files[@]}"
00:05:48.736 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:05:48.736 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:05:48.736 + for nvme in "${!nvme_files[@]}"
00:05:48.736 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G
00:05:48.736 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:05:48.736 + for nvme in "${!nvme_files[@]}"
00:05:48.736 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:05:48.736 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:05:48.995 + for nvme in "${!nvme_files[@]}"
00:05:48.995 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:05:48.995 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:05:48.995 + for nvme in "${!nvme_files[@]}"
00:05:48.995 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:05:48.995 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:05:48.995 + for nvme in "${!nvme_files[@]}"
00:05:48.995 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:05:48.995 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:05:48.995 + for nvme in "${!nvme_files[@]}"
00:05:48.995 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:05:48.995 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:05:48.995 + for nvme in "${!nvme_files[@]}"
00:05:48.995 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G
00:05:48.995 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:05:49.255 + for nvme in "${!nvme_files[@]}"
00:05:49.255 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:05:49.255 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:05:49.255 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:05:49.255 + echo 'End stage prepare_nvme.sh'
00:05:49.255 End stage prepare_nvme.sh
00:05:49.267 [Pipeline] sh
00:05:49.548 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:05:49.548 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:05:49.548 
00:05:49.548 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:05:49.548 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:05:49.548 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:05:49.548 HELP=0
00:05:49.548 DRY_RUN=0
00:05:49.548 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,
00:05:49.548 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:05:49.548 NVME_AUTO_CREATE=0
00:05:49.548 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,,
00:05:49.548 NVME_CMB=,,,,
00:05:49.548 NVME_PMR=,,,,
00:05:49.548 NVME_ZNS=,,,,
00:05:49.548 NVME_MS=true,,,,
00:05:49.548 NVME_FDP=,,,on,
00:05:49.548 SPDK_VAGRANT_DISTRO=fedora39
00:05:49.548 SPDK_VAGRANT_VMCPU=10
00:05:49.548 SPDK_VAGRANT_VMRAM=12288
00:05:49.548 SPDK_VAGRANT_PROVIDER=libvirt
00:05:49.548 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:05:49.548 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:05:49.548 SPDK_OPENSTACK_NETWORK=0
00:05:49.548 VAGRANT_PACKAGE_BOX=0
00:05:49.548 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:05:49.548 FORCE_DISTRO=true
00:05:49.548 VAGRANT_BOX_VERSION=
00:05:49.548 EXTRA_VAGRANTFILES=
00:05:49.548 NIC_MODEL=e1000
00:05:49.548 
00:05:49.548 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:05:49.548 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:05:52.081 Bringing machine 'default' up with 'libvirt' provider...
00:05:53.016 ==> default: Creating image (snapshot of base box volume).
00:05:53.274 ==> default: Creating domain with the following settings...
00:05:53.274 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732109104_c5289d8c7a66ffed382c
00:05:53.274 ==> default: -- Domain type: kvm
00:05:53.275 ==> default: -- Cpus: 10
00:05:53.275 ==> default: -- Feature: acpi
00:05:53.275 ==> default: -- Feature: apic
00:05:53.275 ==> default: -- Feature: pae
00:05:53.275 ==> default: -- Memory: 12288M
00:05:53.275 ==> default: -- Memory Backing: hugepages:
00:05:53.275 ==> default: -- Management MAC:
00:05:53.275 ==> default: -- Loader:
00:05:53.275 ==> default: -- Nvram:
00:05:53.275 ==> default: -- Base box: spdk/fedora39
00:05:53.275 ==> default: -- Storage pool: default
00:05:53.275 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732109104_c5289d8c7a66ffed382c.img (20G)
00:05:53.275 ==> default: -- Volume Cache: default
00:05:53.275 ==> default: -- Kernel:
00:05:53.275 ==> default: -- Initrd:
00:05:53.275 ==> default: -- Graphics Type: vnc
00:05:53.275 ==> default: -- Graphics Port: -1
00:05:53.275 ==> default: -- Graphics IP: 127.0.0.1
00:05:53.275 ==> default: -- Graphics Password: Not defined
00:05:53.275 ==> default: -- Video Type: cirrus
00:05:53.275 ==> default: -- Video VRAM: 9216
00:05:53.275 ==> default: -- Sound Type:
00:05:53.275 ==> default: -- Keymap: en-us
00:05:53.275 ==> default: -- TPM Path:
00:05:53.275 ==> default: -- INPUT: type=mouse, bus=ps2
00:05:53.275 ==> default: -- Command line args:
00:05:53.275 ==> default: -> value=-device,
00:05:53.275 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:05:53.275 ==> default: -> value=-drive,
00:05:53.275 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:05:53.275 ==> default: -> value=-device,
00:05:53.275 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:05:53.275 ==> default: -> value=-device,
00:05:53.275 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:05:53.275 ==> default: -> value=-drive,
00:05:53.275 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0,
00:05:53.275 ==> default: -> value=-device,
00:05:53.275 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:53.275 ==> default: -> value=-device,
00:05:53.275 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:05:53.275 ==> default: -> value=-drive,
00:05:53.275 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:05:53.275 ==> default: -> value=-device,
00:05:53.275 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:53.275 ==> default: -> value=-drive,
00:05:53.275 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:05:53.275 ==> default: -> value=-device,
00:05:53.275 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:53.275 ==> default: -> value=-drive,
00:05:53.275 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:05:53.275 ==> default: -> value=-device,
00:05:53.275 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:53.275 ==> default: -> value=-device,
00:05:53.275 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:05:53.275 ==> default: -> value=-device,
00:05:53.275 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:05:53.275 ==> default: -> value=-drive,
00:05:53.275 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:05:53.275 ==> default: -> value=-device,
00:05:53.275 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:53.844 ==> default: Creating shared folders metadata...
00:05:53.844 ==> default: Starting domain.
00:05:56.380 ==> default: Waiting for domain to get an IP address...
00:06:11.264 ==> default: Waiting for SSH to become available...
00:06:13.171 ==> default: Configuring and enabling network interfaces...
00:06:18.439 default: SSH address: 192.168.121.59:22
00:06:18.439 default: SSH username: vagrant
00:06:18.439 default: SSH auth method: private key
00:06:21.721 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:06:31.860 ==> default: Mounting SSHFS shared folder...
00:06:32.800 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:06:32.800 ==> default: Checking Mount..
00:06:34.707 ==> default: Folder Successfully Mounted!
00:06:34.707 ==> default: Running provisioner: file...
00:06:35.644 default: ~/.gitconfig => .gitconfig
00:06:36.211 
00:06:36.211 SUCCESS!
00:06:36.211 
00:06:36.211 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:06:36.211 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:06:36.211 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:06:36.211 
00:06:36.219 [Pipeline] }
00:06:36.232 [Pipeline] // stage
00:06:36.240 [Pipeline] dir
00:06:36.241 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:06:36.242 [Pipeline] {
00:06:36.252 [Pipeline] catchError
00:06:36.253 [Pipeline] {
00:06:36.260 [Pipeline] sh
00:06:36.538 + vagrant ssh-config --host vagrant
00:06:36.538 + sed -ne /^Host/,$p
00:06:36.538 + tee ssh_conf
00:06:39.858 Host vagrant
00:06:39.858 HostName 192.168.121.59
00:06:39.858 User vagrant
00:06:39.858 Port 22
00:06:39.858 UserKnownHostsFile /dev/null
00:06:39.858 StrictHostKeyChecking no
00:06:39.858 PasswordAuthentication no
00:06:39.858 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:06:39.858 IdentitiesOnly yes
00:06:39.858 LogLevel FATAL
00:06:39.858 ForwardAgent yes
00:06:39.858 ForwardX11 yes
00:06:39.858 
00:06:39.873 [Pipeline] withEnv
00:06:39.876 [Pipeline] {
00:06:39.890 [Pipeline] sh
00:06:40.172 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:06:40.172 source /etc/os-release
00:06:40.172 [[ -e /image.version ]] && img=$(< /image.version)
00:06:40.172 # Minimal, systemd-like check.
00:06:40.172 if [[ -e /.dockerenv ]]; then
00:06:40.172 # Clear garbage from the node's name:
00:06:40.172 # agt-er_autotest_547-896 -> autotest_547-896
00:06:40.172 # $HOSTNAME is the actual container id
00:06:40.172 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:06:40.172 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:06:40.172 # We can assume this is a mount from a host where container is running,
00:06:40.172 # so fetch its hostname to easily identify the target swarm worker.
00:06:40.172 container="$(< /etc/hostname) ($agent)"
00:06:40.172 else
00:06:40.172 # Fallback
00:06:40.172 container=$agent
00:06:40.172 fi
00:06:40.172 fi
00:06:40.172 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:06:40.172 
00:06:40.441 [Pipeline] }
00:06:40.457 [Pipeline] // withEnv
00:06:40.465 [Pipeline] setCustomBuildProperty
00:06:40.478 [Pipeline] stage
00:06:40.480 [Pipeline] { (Tests)
00:06:40.496 [Pipeline] sh
00:06:40.774 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:06:41.072 [Pipeline] sh
00:06:41.352 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:06:41.626 [Pipeline] timeout
00:06:41.626 Timeout set to expire in 50 min
00:06:41.628 [Pipeline] {
00:06:41.643 [Pipeline] sh
00:06:41.928 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:06:42.495 HEAD is now at d2ebd983e bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:06:42.506 [Pipeline] sh
00:06:42.810 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:06:43.080 [Pipeline] sh
00:06:43.356 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:06:43.630 [Pipeline] sh
00:06:43.910 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:06:44.167 ++ readlink -f spdk_repo
00:06:44.167 + DIR_ROOT=/home/vagrant/spdk_repo
00:06:44.167 + [[ -n /home/vagrant/spdk_repo ]]
00:06:44.167 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:06:44.167 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:06:44.168 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:06:44.168 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:06:44.168 + [[ -d /home/vagrant/spdk_repo/output ]]
00:06:44.168 + [[ nvme-vg-autotest == pkgdep-* ]]
00:06:44.168 + cd /home/vagrant/spdk_repo
00:06:44.168 + source /etc/os-release
00:06:44.168 ++ NAME='Fedora Linux'
00:06:44.168 ++ VERSION='39 (Cloud Edition)'
00:06:44.168 ++ ID=fedora
00:06:44.168 ++ VERSION_ID=39
00:06:44.168 ++ VERSION_CODENAME=
00:06:44.168 ++ PLATFORM_ID=platform:f39
00:06:44.168 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:06:44.168 ++ ANSI_COLOR='0;38;2;60;110;180'
00:06:44.168 ++ LOGO=fedora-logo-icon
00:06:44.168 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:06:44.168 ++ HOME_URL=https://fedoraproject.org/
00:06:44.168 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:06:44.168 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:06:44.168 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:06:44.168 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:06:44.168 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:06:44.168 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:06:44.168 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:06:44.168 ++ SUPPORT_END=2024-11-12
00:06:44.168 ++ VARIANT='Cloud Edition'
00:06:44.168 ++ VARIANT_ID=cloud
00:06:44.168 + uname -a
00:06:44.168 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:06:44.168 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:44.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:44.991 Hugepages
00:06:44.991 node hugesize free / total
00:06:44.991 node0 1048576kB 0 / 0
00:06:44.991 node0 2048kB 0 / 0
00:06:44.992 
00:06:44.992 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:44.992 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:06:44.992 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:06:45.250 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:06:45.250 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:06:45.250 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:06:45.250 + rm -f /tmp/spdk-ld-path
00:06:45.250 + source autorun-spdk.conf
00:06:45.250 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:45.250 ++ SPDK_TEST_NVME=1
00:06:45.250 ++ SPDK_TEST_FTL=1
00:06:45.250 ++ SPDK_TEST_ISAL=1
00:06:45.250 ++ SPDK_RUN_ASAN=1
00:06:45.250 ++ SPDK_RUN_UBSAN=1
00:06:45.250 ++ SPDK_TEST_XNVME=1
00:06:45.250 ++ SPDK_TEST_NVME_FDP=1
00:06:45.250 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:06:45.250 ++ RUN_NIGHTLY=0
00:06:45.250 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:06:45.250 + [[ -n '' ]]
00:06:45.250 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:06:45.250 + for M in /var/spdk/build-*-manifest.txt
00:06:45.250 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:06:45.250 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:06:45.250 + for M in /var/spdk/build-*-manifest.txt
00:06:45.250 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:06:45.250 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:06:45.250 + for M in /var/spdk/build-*-manifest.txt
00:06:45.250 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:06:45.250 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:06:45.250 ++ uname
00:06:45.250 + [[ Linux == \L\i\n\u\x ]]
00:06:45.250 + sudo dmesg -T
00:06:45.250 + sudo dmesg --clear
00:06:45.250 + dmesg_pid=5244 + sudo dmesg -Tw
00:06:45.250 + [[ Fedora Linux == FreeBSD ]]
00:06:45.250 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:45.250 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:45.250 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:06:45.250 + [[ -x /usr/src/fio-static/fio ]]
00:06:45.250 + export FIO_BIN=/usr/src/fio-static/fio
00:06:45.250 + FIO_BIN=/usr/src/fio-static/fio
00:06:45.250 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:06:45.250 + [[ ! -v VFIO_QEMU_BIN ]]
00:06:45.250 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:06:45.250 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:45.250 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:45.250 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:06:45.250 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:45.250 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:45.250 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:06:45.509 13:25:57 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:06:45.509 13:25:57 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:06:45.509 13:25:57 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:45.509 13:25:57 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:06:45.509 13:25:57 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:06:45.509 13:25:57 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:06:45.509 13:25:57 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:06:45.509 13:25:57 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:06:45.509 13:25:57 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:06:45.509 13:25:57 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:06:45.509 13:25:57 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:06:45.509 13:25:57 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:06:45.509 13:25:57 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:06:45.509 13:25:57 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:06:45.509 13:25:57 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:06:45.509 13:25:57 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:45.509 13:25:57 -- scripts/common.sh@15 -- $ shopt -s extglob
00:06:45.509 13:25:57 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:06:45.509 13:25:57 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:45.509 13:25:57 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:45.509 13:25:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:45.509 13:25:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:45.509 13:25:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:45.509 13:25:57 -- paths/export.sh@5 -- $ export PATH
00:06:45.509 13:25:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:45.509 13:25:57 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:06:45.509 13:25:57 -- common/autobuild_common.sh@493 -- $ date +%s
00:06:45.509 13:25:57 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732109157.XXXXXX
00:06:45.509 13:25:57 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732109157.YMSLbd
00:06:45.509 13:25:57 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:06:45.509 13:25:57 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:06:45.509 13:25:57 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:06:45.509 13:25:57 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:06:45.509 13:25:57 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:06:45.509 13:25:57 -- common/autobuild_common.sh@509 -- $ get_config_params
00:06:45.509 13:25:57 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:06:45.509 13:25:57 -- common/autotest_common.sh@10 -- $ set +x
00:06:45.509 13:25:57 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:06:45.509 13:25:57 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:06:45.509 13:25:57 -- pm/common@17 -- $ local monitor
00:06:45.509 13:25:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:45.509 13:25:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:45.509 13:25:57 -- pm/common@25 -- $ sleep 1
00:06:45.509 13:25:57 -- pm/common@21 -- $ date +%s
00:06:45.509 13:25:57 -- pm/common@21 -- $ date +%s
00:06:45.509 13:25:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732109157
00:06:45.509 13:25:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732109157
00:06:45.768 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732109157_collect-vmstat.pm.log
00:06:45.768 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732109157_collect-cpu-load.pm.log
00:06:46.704 13:25:58 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:06:46.704 13:25:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:06:46.704 13:25:58 -- spdk/autobuild.sh@12 -- $ umask 022
00:06:46.704 13:25:58 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:06:46.704 13:25:58 -- spdk/autobuild.sh@16 -- $ date -u
00:06:46.704 Wed Nov 20 01:25:58 PM UTC 2024
00:06:46.704 13:25:58 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:06:46.704 v25.01-pre-252-gd2ebd983e
00:06:46.704 13:25:58 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:06:46.704 13:25:58 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:06:46.704 13:25:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:46.704 13:25:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:46.704 13:25:58 -- common/autotest_common.sh@10 -- $ set +x
00:06:46.704 ************************************
00:06:46.704 START TEST asan
00:06:46.704 ************************************
00:06:46.704 using asan
00:06:46.704 13:25:58 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:06:46.704 
00:06:46.704 real 0m0.000s
00:06:46.704 user 0m0.000s
00:06:46.704 sys 0m0.000s
00:06:46.704 13:25:58 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:06:46.704 13:25:58 asan -- common/autotest_common.sh@10 -- $ set +x
00:06:46.704 ************************************
00:06:46.704 END TEST asan
00:06:46.704 ************************************
00:06:46.704 13:25:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:06:46.704 13:25:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:06:46.704 13:25:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:46.704 13:25:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:46.704 13:25:58 -- common/autotest_common.sh@10 -- $ set +x
00:06:46.704 ************************************
00:06:46.704 START TEST ubsan
00:06:46.704 ************************************
00:06:46.704 using ubsan
00:06:46.704 13:25:58 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:06:46.704 
00:06:46.704 real 0m0.000s
00:06:46.704 user 0m0.000s
00:06:46.704 sys 0m0.000s
00:06:46.704 13:25:58 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:06:46.704 13:25:58 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:06:46.704 ************************************
00:06:46.704 END TEST ubsan
00:06:46.704 ************************************
00:06:46.704 13:25:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:06:46.704 13:25:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:06:46.704 13:25:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:06:46.704 13:25:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:06:46.704 13:25:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:06:46.704 13:25:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:06:46.704 13:25:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:06:46.704 13:25:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:06:46.704 13:25:58 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:06:46.962 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:06:46.962 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:06:47.528 Using 'verbs' RDMA provider
00:07:07.040 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:07:21.948 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:07:21.948 Creating mk/config.mk...done.
00:07:21.948 Creating mk/cc.flags.mk...done.
00:07:21.948 Type 'make' to build.
00:07:21.948 13:26:33 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:07:21.948 13:26:33 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:07:21.948 13:26:33 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:07:21.948 13:26:33 -- common/autotest_common.sh@10 -- $ set +x
00:07:21.948 ************************************
00:07:21.948 START TEST make
00:07:21.948 ************************************
00:07:21.948 13:26:33 make -- common/autotest_common.sh@1129 -- $ make -j10
00:07:21.948 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:07:21.948 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:07:21.948 meson setup builddir \
00:07:21.948 -Dwith-libaio=enabled \
00:07:21.948 -Dwith-liburing=enabled \
00:07:21.948 -Dwith-libvfn=disabled \
00:07:21.948 -Dwith-spdk=disabled \
00:07:21.948 -Dexamples=false \
00:07:21.948 -Dtests=false \
00:07:21.948 -Dtools=false && \
00:07:21.948 meson compile -C builddir && \
00:07:21.948 cd -)
00:07:21.948 make[1]: Nothing to be done for 'all'.
00:07:24.481 The Meson build system
00:07:24.481 Version: 1.5.0
00:07:24.481 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:07:24.481 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:07:24.481 Build type: native build
00:07:24.481 Project name: xnvme
00:07:24.481 Project version: 0.7.5
00:07:24.481 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:07:24.481 C linker for the host machine: cc ld.bfd 2.40-14
00:07:24.481 Host machine cpu family: x86_64
00:07:24.481 Host machine cpu: x86_64
00:07:24.481 Message: host_machine.system: linux
00:07:24.481 Compiler for C supports arguments -Wno-missing-braces: YES
00:07:24.481 Compiler for C supports arguments -Wno-cast-function-type: YES
00:07:24.481 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:07:24.481 Run-time dependency threads found: YES
00:07:24.481 Has header "setupapi.h" : NO
00:07:24.481 Has header "linux/blkzoned.h" : YES
00:07:24.481 Has header "linux/blkzoned.h" : YES (cached)
00:07:24.481 Has header "libaio.h" : YES
00:07:24.481 Library aio found: YES
00:07:24.481 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:07:24.481 Run-time dependency liburing found: YES 2.2
00:07:24.481 Dependency libvfn skipped: feature with-libvfn disabled
00:07:24.481 Found CMake: /usr/bin/cmake (3.27.7)
00:07:24.481 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:07:24.481 Subproject spdk : skipped: feature with-spdk disabled
00:07:24.481 Run-time dependency appleframeworks found: NO (tried framework)
00:07:24.481 Run-time dependency appleframeworks found: NO (tried framework)
00:07:24.481 Library rt found: YES
00:07:24.481 Checking for function "clock_gettime" with dependency -lrt: YES
00:07:24.481 Configuring xnvme_config.h using configuration
00:07:24.481 Configuring xnvme.spec using configuration
00:07:24.481 Run-time dependency bash-completion found: YES 2.11
00:07:24.481 Message: Bash-completions: /usr/share/bash-completion/completions
00:07:24.481 Program cp found: YES (/usr/bin/cp)
00:07:24.481 Build targets in project: 3
00:07:24.481 
00:07:24.481 xnvme 0.7.5
00:07:24.481 
00:07:24.481 Subprojects
00:07:24.481 spdk : NO Feature 'with-spdk' disabled
00:07:24.481 
00:07:24.481 User defined options
00:07:24.481 examples : false
00:07:24.481 tests : false
00:07:24.481 tools : false
00:07:24.481 with-libaio : enabled
00:07:24.481 with-liburing: enabled
00:07:24.481 with-libvfn : disabled
00:07:24.481 with-spdk : disabled
00:07:24.481 
00:07:24.481 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:07:24.741 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:07:24.741 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:07:24.741 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:07:24.741 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:07:24.741 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:07:24.741 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:07:24.741 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:07:24.741 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:07:24.741 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:07:24.741 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:07:24.741 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:07:24.741 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:07:24.741 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:07:24.741 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:07:24.741 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:07:24.741 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:07:25.001 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:07:25.001 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:07:25.001 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:07:25.001 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:07:25.001 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:07:25.001 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:07:25.001 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:07:25.001 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:07:25.001 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:07:25.001 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:07:25.001 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:07:25.001 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:07:25.001 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:07:25.001 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:07:25.001 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:07:25.001 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:07:25.001 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:07:25.001 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:07:25.001 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:07:25.001 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:07:25.001 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:07:25.001 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:07:25.001 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:07:25.001 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:07:25.001 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:07:25.001 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:07:25.001 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:07:25.001 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:07:25.001 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:07:25.001 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:07:25.260 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:07:25.260 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:07:25.260 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:07:25.260 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:07:25.260 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:07:25.260 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:07:25.260 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:07:25.260 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:07:25.260 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:07:25.260 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:07:25.260 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:07:25.260 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:07:25.260 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:07:25.260 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:07:25.260 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:07:25.260 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:07:25.260 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:07:25.260 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:07:25.260 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:07:25.527 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:07:25.527 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:07:25.527 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:07:25.527 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:07:25.527 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:07:25.527 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:07:25.527 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:07:25.527 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:07:25.527 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:07:25.786 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:07:26.044 [75/76] Linking static target lib/libxnvme.a
00:07:26.044 [76/76] Linking target lib/libxnvme.so.0.7.5
00:07:26.044 INFO: autodetecting backend as ninja
00:07:26.044 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:07:26.044 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:07:34.171 The Meson build system
00:07:34.171 Version: 1.5.0
00:07:34.171 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:07:34.171 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:07:34.171 Build type: native build
00:07:34.171 Program cat found: YES (/usr/bin/cat)
00:07:34.171 Project name: DPDK
00:07:34.171 Project version: 24.03.0
00:07:34.171 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:07:34.171 C linker for the host machine: cc ld.bfd 2.40-14
00:07:34.171 Host machine cpu family: x86_64
00:07:34.171 Host machine cpu: x86_64
00:07:34.171 Message: ## Building in Developer Mode ##
00:07:34.171 Program pkg-config found: YES (/usr/bin/pkg-config)
00:07:34.171 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:07:34.171 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:07:34.171 Program python3 found: YES (/usr/bin/python3)
00:07:34.171 Program cat found: YES (/usr/bin/cat)
00:07:34.171 Compiler for C supports arguments -march=native: YES
00:07:34.171 Checking for size of "void *" : 8
00:07:34.171 Checking for size of "void *" : 8 (cached)
00:07:34.171 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:07:34.171 Library m found: YES
00:07:34.171 Library numa found: YES
00:07:34.171 Has header "numaif.h" : YES
00:07:34.171 Library fdt found: NO
00:07:34.171 Library execinfo found: NO
00:07:34.171 Has header "execinfo.h" : YES
00:07:34.171 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:07:34.171 Run-time dependency libarchive found: NO (tried pkgconfig)
00:07:34.171 Run-time dependency libbsd found: NO (tried pkgconfig)
00:07:34.171 Run-time dependency jansson found: NO (tried pkgconfig)
00:07:34.171 Run-time dependency openssl found: YES 3.1.1
00:07:34.171 Run-time dependency libpcap found: YES 1.10.4
00:07:34.171 Has header "pcap.h" with dependency libpcap: YES
00:07:34.171 Compiler for C supports arguments -Wcast-qual: YES
00:07:34.171 Compiler for C supports arguments -Wdeprecated: YES
00:07:34.171 Compiler for C supports arguments -Wformat: YES
00:07:34.171 Compiler for C supports arguments -Wformat-nonliteral: NO
00:07:34.171 Compiler for C supports arguments -Wformat-security: NO
00:07:34.171 Compiler for C supports arguments -Wmissing-declarations: YES
00:07:34.171 Compiler for C supports arguments -Wmissing-prototypes: YES
00:07:34.171 Compiler for C supports arguments -Wnested-externs: YES
00:07:34.171 Compiler for C supports arguments -Wold-style-definition: YES
00:07:34.171 Compiler for C supports arguments -Wpointer-arith: YES
00:07:34.171 Compiler for C supports arguments -Wsign-compare: YES
00:07:34.171 Compiler for C supports arguments -Wstrict-prototypes: YES
00:07:34.171 Compiler for C supports arguments -Wundef: YES
00:07:34.171 Compiler for C supports arguments -Wwrite-strings: YES
00:07:34.171 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:07:34.171 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:07:34.171 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:07:34.171 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:07:34.171 Program objdump found: YES (/usr/bin/objdump)
00:07:34.171 Compiler for C supports arguments -mavx512f: YES
00:07:34.171 Checking if "AVX512 checking" compiles: YES
00:07:34.171 Fetching value of define "__SSE4_2__" : 1
00:07:34.171 Fetching value of define "__AES__" : 1
00:07:34.171 Fetching value of define "__AVX__" : 1
00:07:34.171 Fetching value of define "__AVX2__" : 1
00:07:34.171 Fetching value of define "__AVX512BW__" : 1
00:07:34.171 Fetching value of define "__AVX512CD__" : 1
00:07:34.171 Fetching value of define "__AVX512DQ__" : 1
00:07:34.171 Fetching value of define "__AVX512F__" : 1
00:07:34.171 Fetching value of define "__AVX512VL__" : 1
00:07:34.171 Fetching value of define "__PCLMUL__" : 1
00:07:34.171 Fetching value of define "__RDRND__" : 1
00:07:34.171 Fetching value of define "__RDSEED__" : 1
00:07:34.171 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:07:34.171 Fetching value of define "__znver1__" : (undefined)
00:07:34.171 Fetching value of define "__znver2__" : (undefined)
00:07:34.171 Fetching value of define "__znver3__" : (undefined)
00:07:34.171 Fetching value of define "__znver4__" : (undefined)
00:07:34.171 Library asan found: YES
00:07:34.171 Compiler for C supports arguments -Wno-format-truncation: YES
00:07:34.171 Message: lib/log: Defining dependency "log"
00:07:34.171 Message: lib/kvargs: Defining dependency "kvargs"
00:07:34.171 Message: lib/telemetry: Defining dependency "telemetry"
00:07:34.171 Library rt found: YES
00:07:34.171 Checking for function "getentropy" : NO
00:07:34.171 Message: lib/eal: Defining dependency "eal"
00:07:34.171 Message: lib/ring: Defining dependency "ring"
00:07:34.171 Message: lib/rcu: Defining dependency "rcu"
00:07:34.171 Message: lib/mempool: Defining dependency "mempool"
00:07:34.171 Message: lib/mbuf: Defining dependency "mbuf"
00:07:34.171 Fetching value of define "__PCLMUL__" : 1 (cached)
00:07:34.171 Fetching value of define "__AVX512F__" : 1 (cached)
00:07:34.171 Fetching value of define "__AVX512BW__" : 1 (cached)
00:07:34.171 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:07:34.171 Fetching value of define "__AVX512VL__" : 1 (cached)
00:07:34.171 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:07:34.171 Compiler for C supports arguments -mpclmul: YES
00:07:34.171 Compiler for C supports arguments -maes: YES
00:07:34.171 Compiler for C supports arguments -mavx512f: YES (cached)
00:07:34.172 Compiler for C supports arguments -mavx512bw: YES
00:07:34.172 Compiler for C supports arguments -mavx512dq: YES
00:07:34.172 Compiler for C supports arguments -mavx512vl: YES
00:07:34.172 Compiler for C supports arguments -mvpclmulqdq: YES
00:07:34.172 Compiler for C supports arguments -mavx2: YES
00:07:34.172 Compiler for C supports arguments -mavx: YES
00:07:34.172 Message: lib/net: Defining dependency "net"
00:07:34.172 Message: lib/meter: Defining dependency "meter"
00:07:34.172 Message: lib/ethdev: Defining dependency "ethdev"
00:07:34.172 Message: lib/pci: Defining dependency "pci"
00:07:34.172 Message: lib/cmdline: Defining dependency "cmdline"
00:07:34.172 Message: lib/hash: Defining dependency "hash"
00:07:34.172 Message: lib/timer: Defining dependency "timer"
00:07:34.172 Message: lib/compressdev: Defining dependency "compressdev"
00:07:34.172 Message: lib/cryptodev: Defining dependency "cryptodev"
00:07:34.172 Message: lib/dmadev: Defining dependency "dmadev"
00:07:34.172 Compiler for C supports arguments -Wno-cast-qual: YES
00:07:34.172 Message: lib/power: Defining dependency "power"
00:07:34.172 Message: lib/reorder: Defining dependency "reorder"
00:07:34.172 Message: lib/security: Defining dependency "security"
00:07:34.172 Has header "linux/userfaultfd.h" : YES
00:07:34.172 Has header "linux/vduse.h" : YES
00:07:34.172 Message: lib/vhost: Defining dependency "vhost"
00:07:34.172 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:07:34.172 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:07:34.172 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:07:34.172 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:07:34.172 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:07:34.172 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:07:34.172 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:07:34.172 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:07:34.172 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:07:34.172 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:07:34.172 Program doxygen found: YES (/usr/local/bin/doxygen)
00:07:34.172 Configuring doxy-api-html.conf using configuration
00:07:34.172 Configuring doxy-api-man.conf using configuration
00:07:34.172 Program mandb found: YES (/usr/bin/mandb)
00:07:34.172 Program sphinx-build found: NO
00:07:34.172 Configuring rte_build_config.h using configuration
00:07:34.172 Message: 
00:07:34.172 =================
00:07:34.172 Applications Enabled
00:07:34.172 =================
00:07:34.172 
00:07:34.172 apps:
00:07:34.172 
00:07:34.172 
00:07:34.172 Message: 
00:07:34.172 =================
00:07:34.172 Libraries Enabled
00:07:34.172 =================
00:07:34.172 
00:07:34.172 libs:
00:07:34.172 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:07:34.172 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:07:34.172 cryptodev, dmadev, power, reorder, security, vhost, 
00:07:34.172 
00:07:34.172 Message: 
00:07:34.172 ===============
00:07:34.172 Drivers Enabled
00:07:34.172 ===============
00:07:34.172 
00:07:34.172 common:
00:07:34.172 
00:07:34.172 bus:
00:07:34.172 pci, vdev, 
00:07:34.172 mempool:
00:07:34.172 ring, 
00:07:34.172 dma:
00:07:34.172 
00:07:34.172 net:
00:07:34.172 
00:07:34.172 crypto:
00:07:34.172 
00:07:34.172 compress:
00:07:34.172 
00:07:34.172 vdpa:
00:07:34.172 
00:07:34.172 
00:07:34.172 Message: 
00:07:34.172 =================
00:07:34.172 Content Skipped
00:07:34.172 =================
00:07:34.172 
00:07:34.172 apps:
00:07:34.172 dumpcap: explicitly disabled via build config
00:07:34.172 graph: explicitly disabled via build config
00:07:34.172 pdump: explicitly disabled via build config
00:07:34.172 proc-info: explicitly disabled via build config
00:07:34.172 test-acl: explicitly disabled via build config
00:07:34.172 test-bbdev: explicitly disabled via build config
00:07:34.172 test-cmdline: explicitly disabled via build config
00:07:34.172 test-compress-perf: explicitly disabled via build config
00:07:34.172 test-crypto-perf: explicitly disabled via build config
00:07:34.172 test-dma-perf: explicitly disabled via build config
00:07:34.172 test-eventdev: explicitly disabled via build config
00:07:34.172 test-fib: explicitly disabled via build config
00:07:34.172 test-flow-perf: explicitly disabled via build config
00:07:34.172 test-gpudev: explicitly disabled via build config
00:07:34.172 test-mldev: explicitly disabled via build config
00:07:34.172 test-pipeline: explicitly disabled via build config
00:07:34.172 test-pmd: explicitly disabled via build config
00:07:34.172 test-regex: explicitly disabled via build config
00:07:34.172 test-sad: explicitly disabled via build config
00:07:34.172 test-security-perf: explicitly disabled via build config
00:07:34.172 
00:07:34.172 libs:
00:07:34.172 argparse: explicitly disabled via build config
00:07:34.172 metrics: explicitly disabled via build config
00:07:34.172 acl: explicitly disabled via build config
00:07:34.172 bbdev: explicitly disabled via build config
00:07:34.172 bitratestats: explicitly disabled via build config
00:07:34.172 bpf: explicitly disabled via build config
00:07:34.172 cfgfile: explicitly disabled via build config
00:07:34.172 distributor: explicitly disabled via build config
00:07:34.172 efd: explicitly disabled via build config
00:07:34.172 eventdev: explicitly disabled via build config
00:07:34.172 dispatcher: explicitly disabled via build config
00:07:34.172 gpudev: explicitly disabled via build config
00:07:34.172 gro: explicitly disabled via build config
00:07:34.172 gso: explicitly disabled via build config
00:07:34.172 ip_frag: explicitly disabled via build config
00:07:34.172 jobstats: explicitly disabled via build config
00:07:34.172 latencystats: explicitly disabled via build config
00:07:34.172 lpm: explicitly disabled via build config
00:07:34.172 member: explicitly disabled via build config
00:07:34.172 pcapng: explicitly disabled via build config
00:07:34.172 rawdev: explicitly disabled via build config
00:07:34.172 regexdev: explicitly disabled via build config 00:07:34.172 mldev: explicitly disabled via build config 00:07:34.172 rib: explicitly disabled via build config 00:07:34.172 sched: explicitly disabled via build config 00:07:34.172 stack: explicitly disabled via build config 00:07:34.172 ipsec: explicitly disabled via build config 00:07:34.172 pdcp: explicitly disabled via build config 00:07:34.172 fib: explicitly disabled via build config 00:07:34.172 port: explicitly disabled via build config 00:07:34.172 pdump: explicitly disabled via build config 00:07:34.172 table: explicitly disabled via build config 00:07:34.172 pipeline: explicitly disabled via build config 00:07:34.172 graph: explicitly disabled via build config 00:07:34.172 node: explicitly disabled via build config 00:07:34.172 00:07:34.172 drivers: 00:07:34.172 common/cpt: not in enabled drivers build config 00:07:34.172 common/dpaax: not in enabled drivers build config 00:07:34.172 common/iavf: not in enabled drivers build config 00:07:34.172 common/idpf: not in enabled drivers build config 00:07:34.172 common/ionic: not in enabled drivers build config 00:07:34.172 common/mvep: not in enabled drivers build config 00:07:34.172 common/octeontx: not in enabled drivers build config 00:07:34.172 bus/auxiliary: not in enabled drivers build config 00:07:34.172 bus/cdx: not in enabled drivers build config 00:07:34.172 bus/dpaa: not in enabled drivers build config 00:07:34.172 bus/fslmc: not in enabled drivers build config 00:07:34.172 bus/ifpga: not in enabled drivers build config 00:07:34.172 bus/platform: not in enabled drivers build config 00:07:34.172 bus/uacce: not in enabled drivers build config 00:07:34.172 bus/vmbus: not in enabled drivers build config 00:07:34.172 common/cnxk: not in enabled drivers build config 00:07:34.172 common/mlx5: not in enabled drivers build config 00:07:34.172 common/nfp: not in enabled drivers build config 00:07:34.172 common/nitrox: not in enabled drivers build config 00:07:34.172 common/qat: not in enabled drivers build config 00:07:34.172 common/sfc_efx: not in enabled drivers build config 00:07:34.172 mempool/bucket: not in enabled drivers build config 00:07:34.172 mempool/cnxk: not in enabled drivers build config 00:07:34.172 mempool/dpaa: not in enabled drivers build config 00:07:34.172 mempool/dpaa2: not in enabled drivers build config 00:07:34.172 mempool/octeontx: not in enabled drivers build config 00:07:34.172 mempool/stack: not in enabled drivers build config 00:07:34.172 dma/cnxk: not in enabled drivers build config 00:07:34.172 dma/dpaa: not in enabled drivers build config 00:07:34.172 dma/dpaa2: not in enabled drivers build config 00:07:34.172 dma/hisilicon: not in enabled drivers build config 00:07:34.172 dma/idxd: not in enabled drivers build config 00:07:34.172 dma/ioat: not in enabled drivers build config 00:07:34.172 dma/skeleton: not in enabled drivers build config 00:07:34.172 net/af_packet: not in enabled drivers build config 00:07:34.172 net/af_xdp: not in enabled drivers build config 00:07:34.172 net/ark: not in enabled drivers build config 00:07:34.172 net/atlantic: not in enabled drivers build config 00:07:34.172 net/avp: not in enabled drivers build config 00:07:34.172 net/axgbe: not in enabled drivers build config 00:07:34.172 net/bnx2x: not in enabled drivers build config 00:07:34.172 net/bnxt: not in enabled drivers build config 00:07:34.172 net/bonding: not in enabled drivers build config 00:07:34.172 net/cnxk: not in enabled drivers build config 
00:07:34.172 net/cpfl: not in enabled drivers build config 00:07:34.172 net/cxgbe: not in enabled drivers build config 00:07:34.172 net/dpaa: not in enabled drivers build config 00:07:34.172 net/dpaa2: not in enabled drivers build config 00:07:34.172 net/e1000: not in enabled drivers build config 00:07:34.172 net/ena: not in enabled drivers build config 00:07:34.172 net/enetc: not in enabled drivers build config 00:07:34.172 net/enetfec: not in enabled drivers build config 00:07:34.172 net/enic: not in enabled drivers build config 00:07:34.172 net/failsafe: not in enabled drivers build config 00:07:34.172 net/fm10k: not in enabled drivers build config 00:07:34.172 net/gve: not in enabled drivers build config 00:07:34.172 net/hinic: not in enabled drivers build config 00:07:34.172 net/hns3: not in enabled drivers build config 00:07:34.172 net/i40e: not in enabled drivers build config 00:07:34.173 net/iavf: not in enabled drivers build config 00:07:34.173 net/ice: not in enabled drivers build config 00:07:34.173 net/idpf: not in enabled drivers build config 00:07:34.173 net/igc: not in enabled drivers build config 00:07:34.173 net/ionic: not in enabled drivers build config 00:07:34.173 net/ipn3ke: not in enabled drivers build config 00:07:34.173 net/ixgbe: not in enabled drivers build config 00:07:34.173 net/mana: not in enabled drivers build config 00:07:34.173 net/memif: not in enabled drivers build config 00:07:34.173 net/mlx4: not in enabled drivers build config 00:07:34.173 net/mlx5: not in enabled drivers build config 00:07:34.173 net/mvneta: not in enabled drivers build config 00:07:34.173 net/mvpp2: not in enabled drivers build config 00:07:34.173 net/netvsc: not in enabled drivers build config 00:07:34.173 net/nfb: not in enabled drivers build config 00:07:34.173 net/nfp: not in enabled drivers build config 00:07:34.173 net/ngbe: not in enabled drivers build config 00:07:34.173 net/null: not in enabled drivers build config 00:07:34.173 net/octeontx: not in enabled drivers build config 00:07:34.173 net/octeon_ep: not in enabled drivers build config 00:07:34.173 net/pcap: not in enabled drivers build config 00:07:34.173 net/pfe: not in enabled drivers build config 00:07:34.173 net/qede: not in enabled drivers build config 00:07:34.173 net/ring: not in enabled drivers build config 00:07:34.173 net/sfc: not in enabled drivers build config 00:07:34.173 net/softnic: not in enabled drivers build config 00:07:34.173 net/tap: not in enabled drivers build config 00:07:34.173 net/thunderx: not in enabled drivers build config 00:07:34.173 net/txgbe: not in enabled drivers build config 00:07:34.173 net/vdev_netvsc: not in enabled drivers build config 00:07:34.173 net/vhost: not in enabled drivers build config 00:07:34.173 net/virtio: not in enabled drivers build config 00:07:34.173 net/vmxnet3: not in enabled drivers build config 00:07:34.173 raw/*: missing internal dependency, "rawdev" 00:07:34.173 crypto/armv8: not in enabled drivers build config 00:07:34.173 crypto/bcmfs: not in enabled drivers build config 00:07:34.173 crypto/caam_jr: not in enabled drivers build config 00:07:34.173 crypto/ccp: not in enabled drivers build config 00:07:34.173 crypto/cnxk: not in enabled drivers build config 00:07:34.173 crypto/dpaa_sec: not in enabled drivers build config 00:07:34.173 crypto/dpaa2_sec: not in enabled drivers build config 00:07:34.173 crypto/ipsec_mb: not in enabled drivers build config 00:07:34.173 crypto/mlx5: not in enabled drivers build config 00:07:34.173 crypto/mvsam: not in enabled 
drivers build config 00:07:34.173 crypto/nitrox: not in enabled drivers build config 00:07:34.173 crypto/null: not in enabled drivers build config 00:07:34.173 crypto/octeontx: not in enabled drivers build config 00:07:34.173 crypto/openssl: not in enabled drivers build config 00:07:34.173 crypto/scheduler: not in enabled drivers build config 00:07:34.173 crypto/uadk: not in enabled drivers build config 00:07:34.173 crypto/virtio: not in enabled drivers build config 00:07:34.173 compress/isal: not in enabled drivers build config 00:07:34.173 compress/mlx5: not in enabled drivers build config 00:07:34.173 compress/nitrox: not in enabled drivers build config 00:07:34.173 compress/octeontx: not in enabled drivers build config 00:07:34.173 compress/zlib: not in enabled drivers build config 00:07:34.173 regex/*: missing internal dependency, "regexdev" 00:07:34.173 ml/*: missing internal dependency, "mldev" 00:07:34.173 vdpa/ifc: not in enabled drivers build config 00:07:34.173 vdpa/mlx5: not in enabled drivers build config 00:07:34.173 vdpa/nfp: not in enabled drivers build config 00:07:34.173 vdpa/sfc: not in enabled drivers build config 00:07:34.173 event/*: missing internal dependency, "eventdev" 00:07:34.173 baseband/*: missing internal dependency, "bbdev" 00:07:34.173 gpu/*: missing internal dependency, "gpudev" 00:07:34.173 00:07:34.173 00:07:34.173 Build targets in project: 85 00:07:34.173 00:07:34.173 DPDK 24.03.0 00:07:34.173 00:07:34.173 User defined options 00:07:34.173 buildtype : debug 00:07:34.173 default_library : shared 00:07:34.173 libdir : lib 00:07:34.173 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:34.173 b_sanitize : address 00:07:34.173 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:34.173 c_link_args : 00:07:34.173 cpu_instruction_set: native 00:07:34.173 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:07:34.173 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:07:34.173 enable_docs : false 00:07:34.173 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:07:34.173 enable_kmods : false 00:07:34.173 max_lcores : 128 00:07:34.173 tests : false 00:07:34.173 00:07:34.173 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:34.740 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:07:34.740 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:34.740 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:34.740 [3/268] Linking static target lib/librte_kvargs.a 00:07:34.740 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:34.740 [5/268] Linking static target lib/librte_log.a 00:07:34.740 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:35.306 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:35.306 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:35.306 [9/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:35.306 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:35.306 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:35.306 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.306 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:35.306 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:35.306 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:35.564 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:35.564 [17/268] Linking static target lib/librte_telemetry.a 00:07:35.564 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:35.823 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:35.823 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:35.823 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:35.823 [22/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.823 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:36.082 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:36.082 [25/268] Linking target lib/librte_log.so.24.1 00:07:36.082 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:36.082 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:36.082 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:36.340 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:36.340 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:36.340 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:36.340 [32/268] Linking target lib/librte_kvargs.so.24.1 00:07:36.340 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:36.340 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:36.598 [35/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:36.598 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:36.598 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:36.598 [38/268] Linking target lib/librte_telemetry.so.24.1 00:07:36.598 [39/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:36.598 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:36.857 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:36.857 [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:36.857 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:36.857 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:36.857 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:36.857 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:37.115 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:37.115 
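
The "User defined options" block above is meson's record of how this DPDK subtree was configured before ninja took over. As a hedged sketch (the CI drives this through wrapper scripts, so the exact invocation may differ), the same configuration could be reproduced by hand with:

cd /home/vagrant/spdk_repo/spdk/dpdk
# Configure into build-tmp with the option values listed in the summary above.
meson setup build-tmp \
  --buildtype=debug \
  --default-library=shared \
  --libdir=lib \
  --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
  -Db_sanitize=address \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Dcpu_instruction_set=native \
  -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
  -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
  -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
# Build with the same backend command meson reports further down in the log.
/usr/local/bin/ninja -C build-tmp -j 10

Every option value comes from the logged summary; only the working directory and the choice to run meson directly are assumptions.
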
[48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:37.115 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:37.402 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:37.402 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:37.402 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:37.661 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:37.661 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:37.661 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:37.661 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:37.661 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:37.661 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:37.661 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:37.919 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:37.919 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:38.176 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:38.176 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:38.176 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:38.176 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:38.433 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:38.433 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:38.433 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:38.433 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:38.433 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:38.691 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:38.691 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:38.691 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:38.691 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:38.949 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:38.949 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:38.949 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:38.949 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:38.949 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:38.949 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:38.949 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:38.949 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:39.206 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:39.206 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:39.206 [85/268] Linking static target lib/librte_eal.a 00:07:39.206 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:39.206 [87/268] Linking static target lib/librte_ring.a 00:07:39.464 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:39.464 [89/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:39.464 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:39.723 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:39.723 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:39.723 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:39.723 [94/268] Linking static target lib/librte_mempool.a 00:07:39.723 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:39.723 [96/268] Linking static target lib/librte_rcu.a 00:07:39.980 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:39.980 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:39.980 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:39.980 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:40.238 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:40.238 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:40.238 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:40.496 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:40.496 [105/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:40.496 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:40.496 [107/268] Linking static target lib/librte_meter.a 00:07:40.496 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:40.755 [109/268] Linking static target lib/librte_mbuf.a 00:07:40.755 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:40.755 [111/268] Linking static target lib/librte_net.a 00:07:40.755 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:40.755 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:41.013 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:41.013 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:41.272 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:41.272 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:41.272 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:41.530 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:41.530 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:41.530 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:41.788 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:42.045 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:42.045 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:42.303 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:42.303 [126/268] Linking static target lib/librte_pci.a 00:07:42.303 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:42.303 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:42.303 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:42.562 [130/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:42.562 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:42.562 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:42.562 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:42.562 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:42.562 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:42.562 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:42.562 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:42.820 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:42.820 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:42.820 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:42.820 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:42.820 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:42.820 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:42.820 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:42.820 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:43.078 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:43.078 [147/268] Linking static target lib/librte_cmdline.a 00:07:43.078 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:43.078 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:43.338 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:43.338 [151/268] Linking static target lib/librte_timer.a 00:07:43.338 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:43.606 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:43.606 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:43.864 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:43.864 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:43.864 [157/268] Linking static target lib/librte_compressdev.a 00:07:43.864 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:44.121 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:44.121 [160/268] Linking static target lib/librte_ethdev.a 00:07:44.121 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:44.121 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:44.378 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:44.378 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:44.378 [165/268] Linking static target lib/librte_hash.a 00:07:44.637 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:44.637 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:44.637 [168/268] Linking static target lib/librte_dmadev.a 00:07:44.637 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:44.895 [170/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:44.895 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:44.895 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:45.153 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:45.153 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:45.153 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:45.410 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:45.668 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:45.668 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:45.668 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:45.668 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:45.926 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:45.926 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:45.926 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:45.926 [184/268] Linking static target lib/librte_power.a 00:07:45.926 [185/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:45.926 [186/268] Linking static target lib/librte_cryptodev.a 00:07:46.184 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:46.442 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:46.442 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:46.442 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:46.442 [191/268] Linking static target lib/librte_reorder.a 00:07:46.700 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:46.700 [193/268] Linking static target lib/librte_security.a 00:07:47.266 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:47.266 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:47.266 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:47.524 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:47.524 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:47.524 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:47.783 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:47.783 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:48.041 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:48.041 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:48.041 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:48.041 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:48.620 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:48.620 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:48.620 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:48.620 [209/268] Compiling C 
object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:48.620 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:48.620 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:48.878 [212/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:48.878 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:48.878 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:48.878 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:48.878 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:48.878 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:48.878 [218/268] Linking static target drivers/librte_bus_vdev.a 00:07:48.878 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:48.878 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:49.136 [221/268] Linking static target drivers/librte_bus_pci.a 00:07:49.136 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:49.136 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:49.136 [224/268] Linking static target drivers/librte_mempool_ring.a 00:07:49.136 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:49.395 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:49.652 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:50.218 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:52.745 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:52.745 [230/268] Linking target lib/librte_eal.so.24.1 00:07:53.003 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:53.003 [232/268] Linking target lib/librte_meter.so.24.1 00:07:53.262 [233/268] Linking target lib/librte_ring.so.24.1 00:07:53.262 [234/268] Linking target lib/librte_pci.so.24.1 00:07:53.262 [235/268] Linking target lib/librte_dmadev.so.24.1 00:07:53.262 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:53.262 [237/268] Linking target lib/librte_timer.so.24.1 00:07:53.262 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:53.262 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:53.262 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:53.262 [241/268] Linking target lib/librte_rcu.so.24.1 00:07:53.528 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:53.528 [243/268] Linking target lib/librte_mempool.so.24.1 00:07:53.528 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:53.528 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:53.528 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:53.528 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:53.785 [248/268] Generating lib/ethdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:07:53.785 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:53.785 [250/268] Linking target lib/librte_mbuf.so.24.1 00:07:53.785 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:54.043 [252/268] Linking target lib/librte_reorder.so.24.1 00:07:54.043 [253/268] Linking target lib/librte_compressdev.so.24.1 00:07:54.043 [254/268] Linking target lib/librte_net.so.24.1 00:07:54.043 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:07:54.302 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:54.302 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:54.302 [258/268] Linking target lib/librte_hash.so.24.1 00:07:54.302 [259/268] Linking target lib/librte_security.so.24.1 00:07:54.303 [260/268] Linking target lib/librte_cmdline.so.24.1 00:07:54.303 [261/268] Linking target lib/librte_ethdev.so.24.1 00:07:54.560 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:54.561 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:54.561 [264/268] Linking target lib/librte_power.so.24.1 00:07:56.459 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:56.716 [266/268] Linking static target lib/librte_vhost.a 00:07:58.615 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:58.615 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:58.615 INFO: autodetecting backend as ninja 00:07:58.615 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:08:25.234 CC lib/ut_mock/mock.o 00:08:25.234 CC lib/ut/ut.o 00:08:25.234 CC lib/log/log.o 00:08:25.234 CC lib/log/log_deprecated.o 00:08:25.234 CC lib/log/log_flags.o 00:08:25.234 LIB libspdk_ut_mock.a 00:08:25.234 LIB libspdk_ut.a 00:08:25.234 LIB libspdk_log.a 00:08:25.234 SO libspdk_ut_mock.so.6.0 00:08:25.234 SO libspdk_ut.so.2.0 00:08:25.234 SO libspdk_log.so.7.1 00:08:25.234 SYMLINK libspdk_ut.so 00:08:25.234 SYMLINK libspdk_ut_mock.so 00:08:25.234 SYMLINK libspdk_log.so 00:08:25.234 CC lib/util/bit_array.o 00:08:25.234 CC lib/util/base64.o 00:08:25.234 CC lib/util/cpuset.o 00:08:25.234 CC lib/util/crc16.o 00:08:25.234 CC lib/dma/dma.o 00:08:25.234 CC lib/util/crc32.o 00:08:25.234 CC lib/util/crc32c.o 00:08:25.234 CXX lib/trace_parser/trace.o 00:08:25.234 CC lib/ioat/ioat.o 00:08:25.234 CC lib/util/crc32_ieee.o 00:08:25.234 CC lib/vfio_user/host/vfio_user_pci.o 00:08:25.234 CC lib/vfio_user/host/vfio_user.o 00:08:25.234 CC lib/util/crc64.o 00:08:25.234 CC lib/util/dif.o 00:08:25.234 CC lib/util/fd.o 00:08:25.234 LIB libspdk_dma.a 00:08:25.234 CC lib/util/fd_group.o 00:08:25.234 SO libspdk_dma.so.5.0 00:08:25.234 CC lib/util/file.o 00:08:25.234 CC lib/util/hexlify.o 00:08:25.234 SYMLINK libspdk_dma.so 00:08:25.234 CC lib/util/iov.o 00:08:25.234 CC lib/util/math.o 00:08:25.234 LIB libspdk_ioat.a 00:08:25.234 SO libspdk_ioat.so.7.0 00:08:25.234 LIB libspdk_vfio_user.a 00:08:25.234 CC lib/util/net.o 00:08:25.234 SYMLINK libspdk_ioat.so 00:08:25.234 CC lib/util/pipe.o 00:08:25.234 CC lib/util/strerror_tls.o 00:08:25.234 SO libspdk_vfio_user.so.5.0 00:08:25.234 CC lib/util/string.o 00:08:25.234 CC lib/util/uuid.o 00:08:25.234 CC lib/util/xor.o 00:08:25.234 CC lib/util/zipf.o 00:08:25.234 SYMLINK libspdk_vfio_user.so 00:08:25.234 CC 
lib/util/md5.o 00:08:25.234 LIB libspdk_util.a 00:08:25.234 SO libspdk_util.so.10.1 00:08:25.234 SYMLINK libspdk_util.so 00:08:25.234 CC lib/conf/conf.o 00:08:25.234 CC lib/rdma_utils/rdma_utils.o 00:08:25.234 CC lib/idxd/idxd.o 00:08:25.234 CC lib/idxd/idxd_user.o 00:08:25.234 CC lib/idxd/idxd_kernel.o 00:08:25.234 CC lib/json/json_parse.o 00:08:25.234 CC lib/json/json_util.o 00:08:25.234 CC lib/vmd/vmd.o 00:08:25.234 CC lib/env_dpdk/env.o 00:08:25.234 LIB libspdk_trace_parser.a 00:08:25.234 SO libspdk_trace_parser.so.6.0 00:08:25.234 CC lib/vmd/led.o 00:08:25.234 SYMLINK libspdk_trace_parser.so 00:08:25.234 CC lib/env_dpdk/memory.o 00:08:25.234 LIB libspdk_conf.a 00:08:25.234 CC lib/json/json_write.o 00:08:25.234 SO libspdk_conf.so.6.0 00:08:25.234 CC lib/env_dpdk/pci.o 00:08:25.234 SYMLINK libspdk_conf.so 00:08:25.234 CC lib/env_dpdk/init.o 00:08:25.234 CC lib/env_dpdk/threads.o 00:08:25.234 CC lib/env_dpdk/pci_ioat.o 00:08:25.234 LIB libspdk_rdma_utils.a 00:08:25.234 SO libspdk_rdma_utils.so.1.0 00:08:25.234 SYMLINK libspdk_rdma_utils.so 00:08:25.234 CC lib/env_dpdk/pci_virtio.o 00:08:25.234 CC lib/env_dpdk/pci_vmd.o 00:08:25.234 CC lib/env_dpdk/pci_idxd.o 00:08:25.234 LIB libspdk_json.a 00:08:25.234 CC lib/env_dpdk/pci_event.o 00:08:25.234 CC lib/env_dpdk/sigbus_handler.o 00:08:25.234 SO libspdk_json.so.6.0 00:08:25.234 LIB libspdk_idxd.a 00:08:25.234 SYMLINK libspdk_json.so 00:08:25.234 CC lib/env_dpdk/pci_dpdk.o 00:08:25.234 LIB libspdk_vmd.a 00:08:25.234 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:25.234 SO libspdk_idxd.so.12.1 00:08:25.234 SO libspdk_vmd.so.6.0 00:08:25.234 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:25.234 SYMLINK libspdk_idxd.so 00:08:25.234 SYMLINK libspdk_vmd.so 00:08:25.234 CC lib/rdma_provider/common.o 00:08:25.234 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:25.234 CC lib/jsonrpc/jsonrpc_client.o 00:08:25.234 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:25.234 CC lib/jsonrpc/jsonrpc_server.o 00:08:25.235 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:25.235 LIB libspdk_rdma_provider.a 00:08:25.235 SO libspdk_rdma_provider.so.7.0 00:08:25.235 LIB libspdk_jsonrpc.a 00:08:25.235 SYMLINK libspdk_rdma_provider.so 00:08:25.235 SO libspdk_jsonrpc.so.6.0 00:08:25.235 SYMLINK libspdk_jsonrpc.so 00:08:25.493 CC lib/rpc/rpc.o 00:08:25.493 LIB libspdk_env_dpdk.a 00:08:25.751 SO libspdk_env_dpdk.so.15.1 00:08:25.751 LIB libspdk_rpc.a 00:08:25.751 SO libspdk_rpc.so.6.0 00:08:25.751 SYMLINK libspdk_rpc.so 00:08:26.009 SYMLINK libspdk_env_dpdk.so 00:08:26.267 CC lib/trace/trace.o 00:08:26.267 CC lib/trace/trace_flags.o 00:08:26.267 CC lib/trace/trace_rpc.o 00:08:26.267 CC lib/notify/notify.o 00:08:26.267 CC lib/notify/notify_rpc.o 00:08:26.267 CC lib/keyring/keyring.o 00:08:26.267 CC lib/keyring/keyring_rpc.o 00:08:26.526 LIB libspdk_notify.a 00:08:26.526 SO libspdk_notify.so.6.0 00:08:26.526 LIB libspdk_trace.a 00:08:26.526 LIB libspdk_keyring.a 00:08:26.526 SYMLINK libspdk_notify.so 00:08:26.526 SO libspdk_keyring.so.2.0 00:08:26.526 SO libspdk_trace.so.11.0 00:08:26.784 SYMLINK libspdk_keyring.so 00:08:26.784 SYMLINK libspdk_trace.so 00:08:27.043 CC lib/sock/sock.o 00:08:27.043 CC lib/sock/sock_rpc.o 00:08:27.043 CC lib/thread/iobuf.o 00:08:27.043 CC lib/thread/thread.o 00:08:27.609 LIB libspdk_sock.a 00:08:27.609 SO libspdk_sock.so.10.0 00:08:27.609 SYMLINK libspdk_sock.so 00:08:28.176 CC lib/nvme/nvme_ctrlr.o 00:08:28.176 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:28.176 CC lib/nvme/nvme_ns_cmd.o 00:08:28.176 CC lib/nvme/nvme_fabric.o 00:08:28.176 CC lib/nvme/nvme_ns.o 00:08:28.176 CC 
lib/nvme/nvme_pcie.o 00:08:28.176 CC lib/nvme/nvme_pcie_common.o 00:08:28.176 CC lib/nvme/nvme_qpair.o 00:08:28.176 CC lib/nvme/nvme.o 00:08:28.742 LIB libspdk_thread.a 00:08:28.999 CC lib/nvme/nvme_quirks.o 00:08:28.999 SO libspdk_thread.so.11.0 00:08:28.999 CC lib/nvme/nvme_transport.o 00:08:28.999 SYMLINK libspdk_thread.so 00:08:28.999 CC lib/nvme/nvme_discovery.o 00:08:28.999 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:28.999 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:28.999 CC lib/nvme/nvme_tcp.o 00:08:29.258 CC lib/nvme/nvme_opal.o 00:08:29.258 CC lib/nvme/nvme_io_msg.o 00:08:29.259 CC lib/nvme/nvme_poll_group.o 00:08:29.517 CC lib/nvme/nvme_zns.o 00:08:29.517 CC lib/nvme/nvme_stubs.o 00:08:29.517 CC lib/nvme/nvme_auth.o 00:08:29.775 CC lib/nvme/nvme_cuse.o 00:08:29.775 CC lib/nvme/nvme_rdma.o 00:08:30.033 CC lib/accel/accel.o 00:08:30.293 CC lib/blob/blobstore.o 00:08:30.293 CC lib/blob/request.o 00:08:30.293 CC lib/init/json_config.o 00:08:30.293 CC lib/virtio/virtio.o 00:08:30.551 CC lib/init/subsystem.o 00:08:30.551 CC lib/init/subsystem_rpc.o 00:08:30.551 CC lib/virtio/virtio_vhost_user.o 00:08:30.551 CC lib/virtio/virtio_vfio_user.o 00:08:30.810 CC lib/virtio/virtio_pci.o 00:08:30.810 CC lib/init/rpc.o 00:08:30.810 CC lib/accel/accel_rpc.o 00:08:30.810 CC lib/fsdev/fsdev.o 00:08:30.810 CC lib/fsdev/fsdev_io.o 00:08:31.069 LIB libspdk_init.a 00:08:31.069 CC lib/fsdev/fsdev_rpc.o 00:08:31.069 CC lib/accel/accel_sw.o 00:08:31.069 SO libspdk_init.so.6.0 00:08:31.069 CC lib/blob/zeroes.o 00:08:31.069 LIB libspdk_virtio.a 00:08:31.069 SYMLINK libspdk_init.so 00:08:31.069 SO libspdk_virtio.so.7.0 00:08:31.069 CC lib/blob/blob_bs_dev.o 00:08:31.326 SYMLINK libspdk_virtio.so 00:08:31.326 CC lib/event/app.o 00:08:31.326 CC lib/event/log_rpc.o 00:08:31.327 CC lib/event/reactor.o 00:08:31.327 CC lib/event/app_rpc.o 00:08:31.327 LIB libspdk_nvme.a 00:08:31.327 CC lib/event/scheduler_static.o 00:08:31.584 LIB libspdk_accel.a 00:08:31.584 LIB libspdk_fsdev.a 00:08:31.585 SO libspdk_nvme.so.15.0 00:08:31.585 SO libspdk_accel.so.16.0 00:08:31.585 SO libspdk_fsdev.so.2.0 00:08:31.844 SYMLINK libspdk_accel.so 00:08:31.844 SYMLINK libspdk_fsdev.so 00:08:31.844 LIB libspdk_event.a 00:08:31.844 SYMLINK libspdk_nvme.so 00:08:32.102 SO libspdk_event.so.14.0 00:08:32.102 SYMLINK libspdk_event.so 00:08:32.102 CC lib/bdev/bdev.o 00:08:32.102 CC lib/bdev/bdev_rpc.o 00:08:32.102 CC lib/bdev/scsi_nvme.o 00:08:32.102 CC lib/bdev/bdev_zone.o 00:08:32.102 CC lib/bdev/part.o 00:08:32.102 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:33.062 LIB libspdk_fuse_dispatcher.a 00:08:33.062 SO libspdk_fuse_dispatcher.so.1.0 00:08:33.062 SYMLINK libspdk_fuse_dispatcher.so 00:08:34.966 LIB libspdk_blob.a 00:08:34.966 SO libspdk_blob.so.11.0 00:08:34.966 SYMLINK libspdk_blob.so 00:08:35.224 CC lib/blobfs/blobfs.o 00:08:35.224 CC lib/blobfs/tree.o 00:08:35.224 CC lib/lvol/lvol.o 00:08:35.483 LIB libspdk_bdev.a 00:08:35.483 SO libspdk_bdev.so.17.0 00:08:35.743 SYMLINK libspdk_bdev.so 00:08:36.001 CC lib/ftl/ftl_core.o 00:08:36.001 CC lib/ftl/ftl_layout.o 00:08:36.001 CC lib/ftl/ftl_init.o 00:08:36.001 CC lib/ftl/ftl_debug.o 00:08:36.001 CC lib/scsi/dev.o 00:08:36.001 CC lib/ublk/ublk.o 00:08:36.001 CC lib/nbd/nbd.o 00:08:36.001 CC lib/nvmf/ctrlr.o 00:08:36.001 CC lib/ublk/ublk_rpc.o 00:08:36.260 CC lib/nvmf/ctrlr_discovery.o 00:08:36.260 LIB libspdk_blobfs.a 00:08:36.260 CC lib/scsi/lun.o 00:08:36.260 SO libspdk_blobfs.so.10.0 00:08:36.260 CC lib/scsi/port.o 00:08:36.260 SYMLINK libspdk_blobfs.so 00:08:36.260 CC lib/scsi/scsi.o 
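
The interleaved CC/LIB/SO/SYMLINK lines in this stretch are SPDK's quiet make output: objects are compiled per lib/ directory, archived into a static libspdk_*.a, then linked as a versioned shared libspdk_*.so with an unversioned symlink. The configure step itself is not captured in this excerpt; a plausible local reproduction, given the ASan-instrumented DPDK above and the xnvme bdev module that shows up later in the build, might be (the flag set is an assumption, not the CI's recorded invocation):

cd /home/vagrant/spdk_repo/spdk
# Debug build with shared libraries; ASan/UBSan to match the DPDK b_sanitize setting.
./configure --enable-debug --enable-asan --enable-ubsan --with-shared --with-xnvme
make -j10
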
00:08:36.260 LIB libspdk_lvol.a 00:08:36.260 CC lib/scsi/scsi_bdev.o 00:08:36.260 CC lib/ftl/ftl_io.o 00:08:36.260 SO libspdk_lvol.so.10.0 00:08:36.518 CC lib/nbd/nbd_rpc.o 00:08:36.518 SYMLINK libspdk_lvol.so 00:08:36.518 CC lib/ftl/ftl_sb.o 00:08:36.518 CC lib/scsi/scsi_pr.o 00:08:36.519 CC lib/ftl/ftl_l2p.o 00:08:36.519 CC lib/scsi/scsi_rpc.o 00:08:36.519 LIB libspdk_nbd.a 00:08:36.778 SO libspdk_nbd.so.7.0 00:08:36.778 LIB libspdk_ublk.a 00:08:36.778 CC lib/scsi/task.o 00:08:36.778 CC lib/ftl/ftl_l2p_flat.o 00:08:36.778 SO libspdk_ublk.so.3.0 00:08:36.778 CC lib/ftl/ftl_nv_cache.o 00:08:36.778 SYMLINK libspdk_nbd.so 00:08:36.778 CC lib/ftl/ftl_band.o 00:08:36.778 CC lib/nvmf/ctrlr_bdev.o 00:08:36.778 CC lib/nvmf/subsystem.o 00:08:36.778 SYMLINK libspdk_ublk.so 00:08:36.778 CC lib/nvmf/nvmf.o 00:08:37.036 CC lib/ftl/ftl_band_ops.o 00:08:37.036 CC lib/nvmf/nvmf_rpc.o 00:08:37.036 CC lib/ftl/ftl_writer.o 00:08:37.036 LIB libspdk_scsi.a 00:08:37.036 SO libspdk_scsi.so.9.0 00:08:37.314 SYMLINK libspdk_scsi.so 00:08:37.314 CC lib/ftl/ftl_rq.o 00:08:37.314 CC lib/ftl/ftl_reloc.o 00:08:37.314 CC lib/ftl/ftl_l2p_cache.o 00:08:37.314 CC lib/ftl/ftl_p2l.o 00:08:37.573 CC lib/iscsi/conn.o 00:08:37.573 CC lib/iscsi/init_grp.o 00:08:37.573 CC lib/ftl/ftl_p2l_log.o 00:08:37.832 CC lib/nvmf/transport.o 00:08:37.832 CC lib/nvmf/tcp.o 00:08:38.092 CC lib/nvmf/stubs.o 00:08:38.092 CC lib/nvmf/mdns_server.o 00:08:38.092 CC lib/ftl/mngt/ftl_mngt.o 00:08:38.092 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:38.092 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:38.352 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:38.352 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:38.352 CC lib/iscsi/iscsi.o 00:08:38.352 CC lib/nvmf/rdma.o 00:08:38.352 CC lib/iscsi/param.o 00:08:38.352 CC lib/vhost/vhost.o 00:08:38.352 CC lib/iscsi/portal_grp.o 00:08:38.610 CC lib/iscsi/tgt_node.o 00:08:38.610 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:38.610 CC lib/vhost/vhost_rpc.o 00:08:38.610 CC lib/iscsi/iscsi_subsystem.o 00:08:38.868 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:38.868 CC lib/vhost/vhost_scsi.o 00:08:38.868 CC lib/iscsi/iscsi_rpc.o 00:08:39.127 CC lib/iscsi/task.o 00:08:39.128 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:39.128 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:39.128 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:39.128 CC lib/vhost/vhost_blk.o 00:08:39.387 CC lib/vhost/rte_vhost_user.o 00:08:39.387 CC lib/nvmf/auth.o 00:08:39.387 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:39.387 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:39.645 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:39.645 CC lib/ftl/utils/ftl_conf.o 00:08:39.646 CC lib/ftl/utils/ftl_md.o 00:08:39.904 CC lib/ftl/utils/ftl_mempool.o 00:08:39.904 CC lib/ftl/utils/ftl_bitmap.o 00:08:39.904 CC lib/ftl/utils/ftl_property.o 00:08:39.904 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:39.904 LIB libspdk_iscsi.a 00:08:39.904 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:40.163 SO libspdk_iscsi.so.8.0 00:08:40.163 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:40.163 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:40.163 SYMLINK libspdk_iscsi.so 00:08:40.163 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:40.164 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:40.164 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:40.422 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:40.422 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:40.422 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:40.422 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:40.422 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:40.422 LIB libspdk_vhost.a 00:08:40.422 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:40.422 CC 
lib/ftl/base/ftl_base_dev.o 00:08:40.422 CC lib/ftl/base/ftl_base_bdev.o 00:08:40.422 SO libspdk_vhost.so.8.0 00:08:40.680 CC lib/ftl/ftl_trace.o 00:08:40.680 SYMLINK libspdk_vhost.so 00:08:40.938 LIB libspdk_ftl.a 00:08:40.938 LIB libspdk_nvmf.a 00:08:41.197 SO libspdk_ftl.so.9.0 00:08:41.197 SO libspdk_nvmf.so.20.0 00:08:41.455 SYMLINK libspdk_ftl.so 00:08:41.455 SYMLINK libspdk_nvmf.so 00:08:42.021 CC module/env_dpdk/env_dpdk_rpc.o 00:08:42.021 CC module/blob/bdev/blob_bdev.o 00:08:42.021 CC module/keyring/linux/keyring.o 00:08:42.021 CC module/accel/dsa/accel_dsa.o 00:08:42.021 CC module/accel/error/accel_error.o 00:08:42.021 CC module/accel/ioat/accel_ioat.o 00:08:42.021 CC module/keyring/file/keyring.o 00:08:42.021 CC module/fsdev/aio/fsdev_aio.o 00:08:42.279 CC module/sock/posix/posix.o 00:08:42.279 LIB libspdk_env_dpdk_rpc.a 00:08:42.279 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:42.279 SO libspdk_env_dpdk_rpc.so.6.0 00:08:42.279 SYMLINK libspdk_env_dpdk_rpc.so 00:08:42.279 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:42.279 CC module/keyring/linux/keyring_rpc.o 00:08:42.279 CC module/accel/ioat/accel_ioat_rpc.o 00:08:42.279 CC module/keyring/file/keyring_rpc.o 00:08:42.279 CC module/accel/error/accel_error_rpc.o 00:08:42.537 LIB libspdk_scheduler_dynamic.a 00:08:42.537 LIB libspdk_blob_bdev.a 00:08:42.537 SO libspdk_scheduler_dynamic.so.4.0 00:08:42.537 CC module/accel/dsa/accel_dsa_rpc.o 00:08:42.537 SO libspdk_blob_bdev.so.11.0 00:08:42.537 SYMLINK libspdk_scheduler_dynamic.so 00:08:42.537 LIB libspdk_keyring_linux.a 00:08:42.537 SYMLINK libspdk_blob_bdev.so 00:08:42.537 LIB libspdk_accel_ioat.a 00:08:42.537 LIB libspdk_keyring_file.a 00:08:42.537 LIB libspdk_accel_error.a 00:08:42.537 SO libspdk_keyring_linux.so.1.0 00:08:42.537 SO libspdk_accel_ioat.so.6.0 00:08:42.537 SO libspdk_keyring_file.so.2.0 00:08:42.537 SO libspdk_accel_error.so.2.0 00:08:42.537 LIB libspdk_accel_dsa.a 00:08:42.795 SYMLINK libspdk_keyring_linux.so 00:08:42.795 SO libspdk_accel_dsa.so.5.0 00:08:42.795 SYMLINK libspdk_accel_ioat.so 00:08:42.795 SYMLINK libspdk_keyring_file.so 00:08:42.795 CC module/fsdev/aio/linux_aio_mgr.o 00:08:42.795 SYMLINK libspdk_accel_error.so 00:08:42.795 SYMLINK libspdk_accel_dsa.so 00:08:42.795 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:42.795 CC module/accel/iaa/accel_iaa.o 00:08:42.795 CC module/accel/iaa/accel_iaa_rpc.o 00:08:42.795 CC module/scheduler/gscheduler/gscheduler.o 00:08:43.053 LIB libspdk_fsdev_aio.a 00:08:43.053 LIB libspdk_scheduler_dpdk_governor.a 00:08:43.053 LIB libspdk_accel_iaa.a 00:08:43.053 SO libspdk_fsdev_aio.so.1.0 00:08:43.053 CC module/bdev/delay/vbdev_delay.o 00:08:43.053 CC module/bdev/error/vbdev_error.o 00:08:43.053 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:43.053 CC module/blobfs/bdev/blobfs_bdev.o 00:08:43.053 LIB libspdk_scheduler_gscheduler.a 00:08:43.053 SO libspdk_accel_iaa.so.3.0 00:08:43.053 SO libspdk_scheduler_gscheduler.so.4.0 00:08:43.053 LIB libspdk_sock_posix.a 00:08:43.053 CC module/bdev/gpt/gpt.o 00:08:43.053 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:43.053 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:43.053 SO libspdk_sock_posix.so.6.0 00:08:43.053 SYMLINK libspdk_fsdev_aio.so 00:08:43.053 SYMLINK libspdk_accel_iaa.so 00:08:43.053 SYMLINK libspdk_scheduler_gscheduler.so 00:08:43.053 CC module/bdev/error/vbdev_error_rpc.o 00:08:43.053 CC module/bdev/gpt/vbdev_gpt.o 00:08:43.311 CC module/bdev/lvol/vbdev_lvol.o 00:08:43.311 SYMLINK libspdk_sock_posix.so 00:08:43.311 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:08:43.311 LIB libspdk_blobfs_bdev.a 00:08:43.311 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:43.311 SO libspdk_blobfs_bdev.so.6.0 00:08:43.311 LIB libspdk_bdev_error.a 00:08:43.311 CC module/bdev/malloc/bdev_malloc.o 00:08:43.311 SO libspdk_bdev_error.so.6.0 00:08:43.311 SYMLINK libspdk_blobfs_bdev.so 00:08:43.569 CC module/bdev/null/bdev_null.o 00:08:43.569 LIB libspdk_bdev_gpt.a 00:08:43.569 SYMLINK libspdk_bdev_error.so 00:08:43.569 SO libspdk_bdev_gpt.so.6.0 00:08:43.569 CC module/bdev/nvme/bdev_nvme.o 00:08:43.569 LIB libspdk_bdev_delay.a 00:08:43.569 SYMLINK libspdk_bdev_gpt.so 00:08:43.569 SO libspdk_bdev_delay.so.6.0 00:08:43.569 CC module/bdev/passthru/vbdev_passthru.o 00:08:43.569 CC module/bdev/raid/bdev_raid.o 00:08:43.569 CC module/bdev/split/vbdev_split.o 00:08:43.569 CC module/bdev/split/vbdev_split_rpc.o 00:08:43.828 SYMLINK libspdk_bdev_delay.so 00:08:43.828 CC module/bdev/null/bdev_null_rpc.o 00:08:43.828 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:43.828 LIB libspdk_bdev_lvol.a 00:08:43.828 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:43.828 SO libspdk_bdev_lvol.so.6.0 00:08:43.828 SYMLINK libspdk_bdev_lvol.so 00:08:43.828 CC module/bdev/raid/bdev_raid_rpc.o 00:08:44.087 LIB libspdk_bdev_null.a 00:08:44.087 LIB libspdk_bdev_split.a 00:08:44.087 CC module/bdev/xnvme/bdev_xnvme.o 00:08:44.087 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:44.087 LIB libspdk_bdev_malloc.a 00:08:44.087 SO libspdk_bdev_null.so.6.0 00:08:44.087 SO libspdk_bdev_split.so.6.0 00:08:44.087 SO libspdk_bdev_malloc.so.6.0 00:08:44.087 SYMLINK libspdk_bdev_split.so 00:08:44.087 SYMLINK libspdk_bdev_null.so 00:08:44.087 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:08:44.087 CC module/bdev/aio/bdev_aio.o 00:08:44.087 SYMLINK libspdk_bdev_malloc.so 00:08:44.087 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:44.087 CC module/bdev/aio/bdev_aio_rpc.o 00:08:44.087 LIB libspdk_bdev_passthru.a 00:08:44.087 CC module/bdev/raid/bdev_raid_sb.o 00:08:44.087 SO libspdk_bdev_passthru.so.6.0 00:08:44.346 CC module/bdev/raid/raid0.o 00:08:44.346 CC module/bdev/ftl/bdev_ftl.o 00:08:44.346 SYMLINK libspdk_bdev_passthru.so 00:08:44.346 LIB libspdk_bdev_xnvme.a 00:08:44.346 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:44.346 CC module/bdev/raid/raid1.o 00:08:44.346 SO libspdk_bdev_xnvme.so.3.0 00:08:44.346 LIB libspdk_bdev_zone_block.a 00:08:44.346 SO libspdk_bdev_zone_block.so.6.0 00:08:44.346 SYMLINK libspdk_bdev_xnvme.so 00:08:44.346 CC module/bdev/raid/concat.o 00:08:44.346 SYMLINK libspdk_bdev_zone_block.so 00:08:44.346 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:44.606 CC module/bdev/nvme/nvme_rpc.o 00:08:44.606 LIB libspdk_bdev_aio.a 00:08:44.606 CC module/bdev/nvme/bdev_mdns_client.o 00:08:44.606 SO libspdk_bdev_aio.so.6.0 00:08:44.606 LIB libspdk_bdev_ftl.a 00:08:44.606 CC module/bdev/iscsi/bdev_iscsi.o 00:08:44.606 SO libspdk_bdev_ftl.so.6.0 00:08:44.606 CC module/bdev/nvme/vbdev_opal.o 00:08:44.606 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:44.606 SYMLINK libspdk_bdev_aio.so 00:08:44.606 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:44.606 SYMLINK libspdk_bdev_ftl.so 00:08:44.606 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:44.864 LIB libspdk_bdev_raid.a 00:08:44.864 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:44.864 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:44.864 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:44.864 SO libspdk_bdev_raid.so.6.0 00:08:45.122 SYMLINK libspdk_bdev_raid.so 00:08:45.122 LIB libspdk_bdev_iscsi.a 00:08:45.122 SO 
libspdk_bdev_iscsi.so.6.0 00:08:45.381 SYMLINK libspdk_bdev_iscsi.so 00:08:45.640 LIB libspdk_bdev_virtio.a 00:08:45.640 SO libspdk_bdev_virtio.so.6.0 00:08:45.640 SYMLINK libspdk_bdev_virtio.so 00:08:46.619 LIB libspdk_bdev_nvme.a 00:08:46.878 SO libspdk_bdev_nvme.so.7.1 00:08:46.878 SYMLINK libspdk_bdev_nvme.so 00:08:47.813 CC module/event/subsystems/sock/sock.o 00:08:47.813 CC module/event/subsystems/vmd/vmd.o 00:08:47.813 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:47.813 CC module/event/subsystems/fsdev/fsdev.o 00:08:47.813 CC module/event/subsystems/scheduler/scheduler.o 00:08:47.813 CC module/event/subsystems/iobuf/iobuf.o 00:08:47.813 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:47.813 CC module/event/subsystems/keyring/keyring.o 00:08:47.813 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:47.813 LIB libspdk_event_scheduler.a 00:08:47.813 LIB libspdk_event_keyring.a 00:08:47.813 LIB libspdk_event_sock.a 00:08:47.813 LIB libspdk_event_fsdev.a 00:08:47.813 LIB libspdk_event_vmd.a 00:08:47.813 SO libspdk_event_scheduler.so.4.0 00:08:47.813 LIB libspdk_event_vhost_blk.a 00:08:47.813 SO libspdk_event_sock.so.5.0 00:08:47.813 SO libspdk_event_keyring.so.1.0 00:08:47.813 SO libspdk_event_fsdev.so.1.0 00:08:47.813 LIB libspdk_event_iobuf.a 00:08:47.813 SO libspdk_event_vhost_blk.so.3.0 00:08:47.813 SO libspdk_event_vmd.so.6.0 00:08:47.813 SYMLINK libspdk_event_sock.so 00:08:47.813 SYMLINK libspdk_event_scheduler.so 00:08:47.813 SYMLINK libspdk_event_keyring.so 00:08:47.813 SO libspdk_event_iobuf.so.3.0 00:08:47.813 SYMLINK libspdk_event_vhost_blk.so 00:08:47.813 SYMLINK libspdk_event_fsdev.so 00:08:47.813 SYMLINK libspdk_event_vmd.so 00:08:47.813 SYMLINK libspdk_event_iobuf.so 00:08:48.381 CC module/event/subsystems/accel/accel.o 00:08:48.640 LIB libspdk_event_accel.a 00:08:48.640 SO libspdk_event_accel.so.6.0 00:08:48.640 SYMLINK libspdk_event_accel.so 00:08:49.208 CC module/event/subsystems/bdev/bdev.o 00:08:49.208 LIB libspdk_event_bdev.a 00:08:49.208 SO libspdk_event_bdev.so.6.0 00:08:49.467 SYMLINK libspdk_event_bdev.so 00:08:49.725 CC module/event/subsystems/nbd/nbd.o 00:08:49.726 CC module/event/subsystems/ublk/ublk.o 00:08:49.726 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:49.726 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:49.726 CC module/event/subsystems/scsi/scsi.o 00:08:49.983 LIB libspdk_event_ublk.a 00:08:49.983 LIB libspdk_event_nbd.a 00:08:49.983 LIB libspdk_event_scsi.a 00:08:49.983 SO libspdk_event_ublk.so.3.0 00:08:49.983 SO libspdk_event_nbd.so.6.0 00:08:49.983 SO libspdk_event_scsi.so.6.0 00:08:49.983 LIB libspdk_event_nvmf.a 00:08:49.983 SYMLINK libspdk_event_ublk.so 00:08:49.983 SYMLINK libspdk_event_nbd.so 00:08:49.983 SO libspdk_event_nvmf.so.6.0 00:08:49.983 SYMLINK libspdk_event_scsi.so 00:08:50.242 SYMLINK libspdk_event_nvmf.so 00:08:50.242 CC module/event/subsystems/iscsi/iscsi.o 00:08:50.242 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:50.500 LIB libspdk_event_iscsi.a 00:08:50.500 LIB libspdk_event_vhost_scsi.a 00:08:50.500 SO libspdk_event_iscsi.so.6.0 00:08:50.758 SO libspdk_event_vhost_scsi.so.3.0 00:08:50.758 SYMLINK libspdk_event_iscsi.so 00:08:50.758 SYMLINK libspdk_event_vhost_scsi.so 00:08:51.017 SO libspdk.so.6.0 00:08:51.017 SYMLINK libspdk.so 00:08:51.275 CXX app/trace/trace.o 00:08:51.275 CC app/trace_record/trace_record.o 00:08:51.275 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:51.275 CC app/iscsi_tgt/iscsi_tgt.o 00:08:51.275 CC app/nvmf_tgt/nvmf_main.o 00:08:51.275 CC examples/ioat/perf/perf.o 
00:08:51.275 CC examples/util/zipf/zipf.o 00:08:51.275 CC test/thread/poller_perf/poller_perf.o 00:08:51.275 CC test/app/bdev_svc/bdev_svc.o 00:08:51.533 CC test/dma/test_dma/test_dma.o 00:08:51.533 LINK zipf 00:08:51.533 LINK nvmf_tgt 00:08:51.533 LINK poller_perf 00:08:51.533 LINK iscsi_tgt 00:08:51.533 LINK spdk_trace_record 00:08:51.533 LINK interrupt_tgt 00:08:51.533 LINK ioat_perf 00:08:51.533 LINK bdev_svc 00:08:51.789 LINK spdk_trace 00:08:51.789 CC test/app/histogram_perf/histogram_perf.o 00:08:52.047 CC test/app/stub/stub.o 00:08:52.047 CC test/app/jsoncat/jsoncat.o 00:08:52.047 CC examples/ioat/verify/verify.o 00:08:52.047 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:52.047 CC app/spdk_tgt/spdk_tgt.o 00:08:52.047 TEST_HEADER include/spdk/accel.h 00:08:52.047 TEST_HEADER include/spdk/accel_module.h 00:08:52.047 TEST_HEADER include/spdk/assert.h 00:08:52.047 TEST_HEADER include/spdk/barrier.h 00:08:52.047 TEST_HEADER include/spdk/base64.h 00:08:52.047 TEST_HEADER include/spdk/bdev.h 00:08:52.047 TEST_HEADER include/spdk/bdev_module.h 00:08:52.047 TEST_HEADER include/spdk/bdev_zone.h 00:08:52.047 TEST_HEADER include/spdk/bit_array.h 00:08:52.047 TEST_HEADER include/spdk/bit_pool.h 00:08:52.047 TEST_HEADER include/spdk/blob_bdev.h 00:08:52.047 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:52.047 TEST_HEADER include/spdk/blobfs.h 00:08:52.047 LINK jsoncat 00:08:52.047 TEST_HEADER include/spdk/blob.h 00:08:52.047 TEST_HEADER include/spdk/conf.h 00:08:52.047 TEST_HEADER include/spdk/config.h 00:08:52.047 TEST_HEADER include/spdk/cpuset.h 00:08:52.047 TEST_HEADER include/spdk/crc16.h 00:08:52.047 LINK histogram_perf 00:08:52.047 TEST_HEADER include/spdk/crc32.h 00:08:52.047 TEST_HEADER include/spdk/crc64.h 00:08:52.047 TEST_HEADER include/spdk/dif.h 00:08:52.047 TEST_HEADER include/spdk/dma.h 00:08:52.047 TEST_HEADER include/spdk/endian.h 00:08:52.047 TEST_HEADER include/spdk/env_dpdk.h 00:08:52.047 TEST_HEADER include/spdk/env.h 00:08:52.047 TEST_HEADER include/spdk/event.h 00:08:52.047 TEST_HEADER include/spdk/fd_group.h 00:08:52.047 TEST_HEADER include/spdk/fd.h 00:08:52.047 LINK test_dma 00:08:52.047 TEST_HEADER include/spdk/file.h 00:08:52.047 TEST_HEADER include/spdk/fsdev.h 00:08:52.047 TEST_HEADER include/spdk/fsdev_module.h 00:08:52.047 TEST_HEADER include/spdk/ftl.h 00:08:52.047 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:52.047 TEST_HEADER include/spdk/gpt_spec.h 00:08:52.047 TEST_HEADER include/spdk/hexlify.h 00:08:52.047 TEST_HEADER include/spdk/histogram_data.h 00:08:52.047 TEST_HEADER include/spdk/idxd.h 00:08:52.047 CC examples/sock/hello_world/hello_sock.o 00:08:52.047 LINK stub 00:08:52.047 TEST_HEADER include/spdk/idxd_spec.h 00:08:52.047 TEST_HEADER include/spdk/init.h 00:08:52.047 TEST_HEADER include/spdk/ioat.h 00:08:52.047 TEST_HEADER include/spdk/ioat_spec.h 00:08:52.047 TEST_HEADER include/spdk/iscsi_spec.h 00:08:52.358 TEST_HEADER include/spdk/json.h 00:08:52.358 TEST_HEADER include/spdk/jsonrpc.h 00:08:52.358 TEST_HEADER include/spdk/keyring.h 00:08:52.358 CC examples/thread/thread/thread_ex.o 00:08:52.358 TEST_HEADER include/spdk/keyring_module.h 00:08:52.358 TEST_HEADER include/spdk/likely.h 00:08:52.358 TEST_HEADER include/spdk/log.h 00:08:52.358 TEST_HEADER include/spdk/lvol.h 00:08:52.358 TEST_HEADER include/spdk/md5.h 00:08:52.358 TEST_HEADER include/spdk/memory.h 00:08:52.358 TEST_HEADER include/spdk/mmio.h 00:08:52.358 TEST_HEADER include/spdk/nbd.h 00:08:52.358 TEST_HEADER include/spdk/net.h 00:08:52.358 TEST_HEADER include/spdk/notify.h 
00:08:52.358 TEST_HEADER include/spdk/nvme.h 00:08:52.358 TEST_HEADER include/spdk/nvme_intel.h 00:08:52.358 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:52.358 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:52.358 TEST_HEADER include/spdk/nvme_spec.h 00:08:52.358 TEST_HEADER include/spdk/nvme_zns.h 00:08:52.358 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:52.358 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:52.358 TEST_HEADER include/spdk/nvmf.h 00:08:52.358 TEST_HEADER include/spdk/nvmf_spec.h 00:08:52.358 TEST_HEADER include/spdk/nvmf_transport.h 00:08:52.358 TEST_HEADER include/spdk/opal.h 00:08:52.358 TEST_HEADER include/spdk/opal_spec.h 00:08:52.358 TEST_HEADER include/spdk/pci_ids.h 00:08:52.358 TEST_HEADER include/spdk/pipe.h 00:08:52.358 LINK spdk_tgt 00:08:52.358 TEST_HEADER include/spdk/queue.h 00:08:52.358 TEST_HEADER include/spdk/reduce.h 00:08:52.358 TEST_HEADER include/spdk/rpc.h 00:08:52.358 TEST_HEADER include/spdk/scheduler.h 00:08:52.358 TEST_HEADER include/spdk/scsi.h 00:08:52.358 LINK verify 00:08:52.358 TEST_HEADER include/spdk/scsi_spec.h 00:08:52.358 TEST_HEADER include/spdk/sock.h 00:08:52.358 TEST_HEADER include/spdk/stdinc.h 00:08:52.358 TEST_HEADER include/spdk/string.h 00:08:52.358 TEST_HEADER include/spdk/thread.h 00:08:52.358 TEST_HEADER include/spdk/trace.h 00:08:52.358 TEST_HEADER include/spdk/trace_parser.h 00:08:52.358 TEST_HEADER include/spdk/tree.h 00:08:52.358 TEST_HEADER include/spdk/ublk.h 00:08:52.358 TEST_HEADER include/spdk/util.h 00:08:52.358 TEST_HEADER include/spdk/uuid.h 00:08:52.358 TEST_HEADER include/spdk/version.h 00:08:52.358 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:52.358 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:52.358 TEST_HEADER include/spdk/vhost.h 00:08:52.358 TEST_HEADER include/spdk/vmd.h 00:08:52.358 TEST_HEADER include/spdk/xor.h 00:08:52.358 TEST_HEADER include/spdk/zipf.h 00:08:52.358 CXX test/cpp_headers/accel.o 00:08:52.359 LINK hello_sock 00:08:52.617 CXX test/cpp_headers/accel_module.o 00:08:52.617 CC examples/vmd/lsvmd/lsvmd.o 00:08:52.617 CC examples/vmd/led/led.o 00:08:52.617 LINK thread 00:08:52.617 CC test/env/mem_callbacks/mem_callbacks.o 00:08:52.617 LINK nvme_fuzz 00:08:52.617 CXX test/cpp_headers/assert.o 00:08:52.617 CC app/spdk_lspci/spdk_lspci.o 00:08:52.617 LINK lsvmd 00:08:52.617 LINK led 00:08:52.617 CC test/event/event_perf/event_perf.o 00:08:52.877 CC examples/idxd/perf/perf.o 00:08:52.877 CC app/spdk_nvme_perf/perf.o 00:08:52.877 CXX test/cpp_headers/barrier.o 00:08:52.877 CXX test/cpp_headers/base64.o 00:08:52.877 LINK spdk_lspci 00:08:52.877 CC app/spdk_nvme_identify/identify.o 00:08:52.877 CXX test/cpp_headers/bdev.o 00:08:52.877 LINK event_perf 00:08:52.877 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:53.135 CC app/spdk_top/spdk_top.o 00:08:53.135 CXX test/cpp_headers/bdev_module.o 00:08:53.135 CC app/spdk_nvme_discover/discovery_aer.o 00:08:53.135 LINK mem_callbacks 00:08:53.135 LINK idxd_perf 00:08:53.135 CC test/event/reactor/reactor.o 00:08:53.393 CC examples/nvme/hello_world/hello_world.o 00:08:53.393 LINK reactor 00:08:53.393 CXX test/cpp_headers/bdev_zone.o 00:08:53.393 CC test/env/vtophys/vtophys.o 00:08:53.393 LINK spdk_nvme_discover 00:08:53.652 CC app/vhost/vhost.o 00:08:53.652 LINK hello_world 00:08:53.652 CXX test/cpp_headers/bit_array.o 00:08:53.652 LINK vtophys 00:08:53.652 CC test/event/reactor_perf/reactor_perf.o 00:08:53.910 LINK vhost 00:08:53.910 LINK reactor_perf 00:08:53.910 CXX test/cpp_headers/bit_pool.o 00:08:53.910 LINK spdk_nvme_perf 00:08:53.910 CC 
examples/nvme/reconnect/reconnect.o 00:08:53.910 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:53.910 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:54.168 CXX test/cpp_headers/blob_bdev.o 00:08:54.168 LINK spdk_nvme_identify 00:08:54.168 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:54.168 LINK env_dpdk_post_init 00:08:54.168 CC test/event/app_repeat/app_repeat.o 00:08:54.168 CXX test/cpp_headers/blobfs_bdev.o 00:08:54.168 CC app/spdk_dd/spdk_dd.o 00:08:54.168 LINK spdk_top 00:08:54.426 LINK hello_fsdev 00:08:54.426 LINK reconnect 00:08:54.426 LINK app_repeat 00:08:54.426 CC examples/nvme/arbitration/arbitration.o 00:08:54.426 CXX test/cpp_headers/blobfs.o 00:08:54.426 CC test/env/memory/memory_ut.o 00:08:54.684 CC examples/nvme/hotplug/hotplug.o 00:08:54.684 CXX test/cpp_headers/blob.o 00:08:54.684 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:54.684 LINK spdk_dd 00:08:54.684 CC examples/accel/perf/accel_perf.o 00:08:54.684 CC test/event/scheduler/scheduler.o 00:08:54.684 LINK nvme_manage 00:08:54.942 LINK arbitration 00:08:54.942 LINK hotplug 00:08:54.942 CXX test/cpp_headers/conf.o 00:08:54.942 LINK cmb_copy 00:08:54.942 LINK scheduler 00:08:54.942 CXX test/cpp_headers/config.o 00:08:54.942 CXX test/cpp_headers/cpuset.o 00:08:54.942 CXX test/cpp_headers/crc16.o 00:08:55.200 LINK iscsi_fuzz 00:08:55.200 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:55.200 CC app/fio/nvme/fio_plugin.o 00:08:55.200 CC examples/nvme/abort/abort.o 00:08:55.200 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:55.200 CXX test/cpp_headers/crc32.o 00:08:55.200 CC examples/blob/hello_world/hello_blob.o 00:08:55.457 CXX test/cpp_headers/crc64.o 00:08:55.457 LINK accel_perf 00:08:55.457 CC app/fio/bdev/fio_plugin.o 00:08:55.457 CC test/env/pci/pci_ut.o 00:08:55.457 LINK hello_blob 00:08:55.457 CXX test/cpp_headers/dif.o 00:08:55.713 CC examples/blob/cli/blobcli.o 00:08:55.713 LINK abort 00:08:55.713 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:55.713 CXX test/cpp_headers/dma.o 00:08:55.713 LINK vhost_fuzz 00:08:55.713 CXX test/cpp_headers/endian.o 00:08:55.971 CXX test/cpp_headers/env_dpdk.o 00:08:55.971 LINK memory_ut 00:08:55.971 LINK pmr_persistence 00:08:55.971 LINK spdk_nvme 00:08:55.971 LINK pci_ut 00:08:55.971 LINK spdk_bdev 00:08:55.971 CXX test/cpp_headers/env.o 00:08:56.229 CC test/rpc_client/rpc_client_test.o 00:08:56.229 CC test/nvme/aer/aer.o 00:08:56.229 LINK blobcli 00:08:56.229 CC examples/bdev/hello_world/hello_bdev.o 00:08:56.229 CC examples/bdev/bdevperf/bdevperf.o 00:08:56.229 CC test/accel/dif/dif.o 00:08:56.229 CXX test/cpp_headers/event.o 00:08:56.229 LINK rpc_client_test 00:08:56.229 CC test/blobfs/mkfs/mkfs.o 00:08:56.487 CXX test/cpp_headers/fd_group.o 00:08:56.487 CXX test/cpp_headers/fd.o 00:08:56.487 LINK hello_bdev 00:08:56.487 CC test/lvol/esnap/esnap.o 00:08:56.487 CXX test/cpp_headers/file.o 00:08:56.487 LINK aer 00:08:56.487 CXX test/cpp_headers/fsdev.o 00:08:56.487 CXX test/cpp_headers/fsdev_module.o 00:08:56.487 LINK mkfs 00:08:56.487 CXX test/cpp_headers/ftl.o 00:08:56.745 CXX test/cpp_headers/fuse_dispatcher.o 00:08:56.745 CC test/nvme/reset/reset.o 00:08:56.745 CXX test/cpp_headers/gpt_spec.o 00:08:56.745 CC test/nvme/sgl/sgl.o 00:08:56.745 CC test/nvme/e2edp/nvme_dp.o 00:08:56.745 CXX test/cpp_headers/hexlify.o 00:08:56.745 CXX test/cpp_headers/histogram_data.o 00:08:57.003 CXX test/cpp_headers/idxd.o 00:08:57.003 CXX test/cpp_headers/idxd_spec.o 00:08:57.003 CXX test/cpp_headers/init.o 00:08:57.003 CXX test/cpp_headers/ioat.o 00:08:57.003 LINK 
reset 00:08:57.003 LINK dif 00:08:57.003 LINK sgl 00:08:57.003 LINK nvme_dp 00:08:57.260 CXX test/cpp_headers/ioat_spec.o 00:08:57.260 CC test/nvme/overhead/overhead.o 00:08:57.260 CC test/nvme/err_injection/err_injection.o 00:08:57.260 CC test/nvme/startup/startup.o 00:08:57.260 LINK bdevperf 00:08:57.260 CXX test/cpp_headers/iscsi_spec.o 00:08:57.260 CXX test/cpp_headers/json.o 00:08:57.260 CC test/nvme/reserve/reserve.o 00:08:57.517 CC test/nvme/simple_copy/simple_copy.o 00:08:57.517 LINK err_injection 00:08:57.517 LINK startup 00:08:57.517 CXX test/cpp_headers/jsonrpc.o 00:08:57.517 LINK overhead 00:08:57.517 LINK reserve 00:08:57.517 CC test/nvme/connect_stress/connect_stress.o 00:08:57.517 CC test/bdev/bdevio/bdevio.o 00:08:57.774 CXX test/cpp_headers/keyring.o 00:08:57.774 LINK simple_copy 00:08:57.774 CC test/nvme/boot_partition/boot_partition.o 00:08:57.774 CC test/nvme/compliance/nvme_compliance.o 00:08:57.774 CC examples/nvmf/nvmf/nvmf.o 00:08:57.774 LINK connect_stress 00:08:57.774 CC test/nvme/fused_ordering/fused_ordering.o 00:08:57.774 CXX test/cpp_headers/keyring_module.o 00:08:57.774 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:58.031 LINK boot_partition 00:08:58.031 CXX test/cpp_headers/likely.o 00:08:58.031 LINK bdevio 00:08:58.031 CC test/nvme/fdp/fdp.o 00:08:58.031 LINK fused_ordering 00:08:58.031 LINK doorbell_aers 00:08:58.031 CC test/nvme/cuse/cuse.o 00:08:58.031 CXX test/cpp_headers/log.o 00:08:58.031 LINK nvmf 00:08:58.031 LINK nvme_compliance 00:08:58.290 CXX test/cpp_headers/lvol.o 00:08:58.290 CXX test/cpp_headers/md5.o 00:08:58.290 CXX test/cpp_headers/memory.o 00:08:58.290 CXX test/cpp_headers/mmio.o 00:08:58.290 CXX test/cpp_headers/nbd.o 00:08:58.290 CXX test/cpp_headers/net.o 00:08:58.290 CXX test/cpp_headers/notify.o 00:08:58.290 CXX test/cpp_headers/nvme.o 00:08:58.290 CXX test/cpp_headers/nvme_intel.o 00:08:58.290 LINK fdp 00:08:58.290 CXX test/cpp_headers/nvme_ocssd.o 00:08:58.290 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:58.549 CXX test/cpp_headers/nvme_spec.o 00:08:58.549 CXX test/cpp_headers/nvme_zns.o 00:08:58.549 CXX test/cpp_headers/nvmf_cmd.o 00:08:58.549 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:58.549 CXX test/cpp_headers/nvmf.o 00:08:58.549 CXX test/cpp_headers/nvmf_spec.o 00:08:58.549 CXX test/cpp_headers/nvmf_transport.o 00:08:58.549 CXX test/cpp_headers/opal.o 00:08:58.549 CXX test/cpp_headers/opal_spec.o 00:08:58.549 CXX test/cpp_headers/pci_ids.o 00:08:58.807 CXX test/cpp_headers/pipe.o 00:08:58.807 CXX test/cpp_headers/queue.o 00:08:58.807 CXX test/cpp_headers/reduce.o 00:08:58.807 CXX test/cpp_headers/rpc.o 00:08:58.807 CXX test/cpp_headers/scheduler.o 00:08:58.807 CXX test/cpp_headers/scsi.o 00:08:58.807 CXX test/cpp_headers/scsi_spec.o 00:08:58.807 CXX test/cpp_headers/sock.o 00:08:58.807 CXX test/cpp_headers/stdinc.o 00:08:58.807 CXX test/cpp_headers/string.o 00:08:58.807 CXX test/cpp_headers/thread.o 00:08:58.807 CXX test/cpp_headers/trace.o 00:08:58.807 CXX test/cpp_headers/trace_parser.o 00:08:59.066 CXX test/cpp_headers/tree.o 00:08:59.066 CXX test/cpp_headers/ublk.o 00:08:59.066 CXX test/cpp_headers/util.o 00:08:59.066 CXX test/cpp_headers/uuid.o 00:08:59.066 CXX test/cpp_headers/version.o 00:08:59.066 CXX test/cpp_headers/vfio_user_pci.o 00:08:59.066 CXX test/cpp_headers/vfio_user_spec.o 00:08:59.066 CXX test/cpp_headers/vhost.o 00:08:59.066 CXX test/cpp_headers/vmd.o 00:08:59.066 CXX test/cpp_headers/xor.o 00:08:59.066 CXX test/cpp_headers/zipf.o 00:08:59.634 LINK cuse 00:09:02.918 LINK esnap 00:09:03.176 00:09:03.176 
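The TEST_HEADER/CXX pairs above are SPDK's public-header hygiene check: every header under include/spdk is compiled on its own as C++, so missing includes or C++-incompatible declarations surface immediately. A rough sketch of the idea (file layout and compiler flags are assumptions):

  # Sketch: compile each public header in isolation (paths/flags assumed).
  for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    printf '#include <spdk/%s.h>\n' "$name" > "test/cpp_headers/$name.cpp"
    g++ -I include -c "test/cpp_headers/$name.cpp" -o "test/cpp_headers/$name.o"
  done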
real 1m41.798s
00:09:03.176 user 8m39.361s
00:09:03.176 sys 2m5.064s
00:09:03.176 13:28:15 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:09:03.176 ************************************
00:09:03.176 END TEST make
00:09:03.176 ************************************
00:09:03.176 13:28:15 make -- common/autotest_common.sh@10 -- $ set +x
00:09:03.176 13:28:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:09:03.176 13:28:15 -- pm/common@29 -- $ signal_monitor_resources TERM
00:09:03.176 13:28:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:09:03.176 13:28:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:03.176 13:28:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:09:03.176 13:28:15 -- pm/common@44 -- $ pid=5286
00:09:03.176 13:28:15 -- pm/common@50 -- $ kill -TERM 5286
00:09:03.176 13:28:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:09:03.176 13:28:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:09:03.176 13:28:15 -- pm/common@44 -- $ pid=5288
00:09:03.176 13:28:15 -- pm/common@50 -- $ kill -TERM 5288
00:09:03.177 13:28:15 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:09:03.177 13:28:15 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:09:03.435 13:28:15 -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:03.435 13:28:15 -- common/autotest_common.sh@1693 -- # lcov --version
00:09:03.435 13:28:15 -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:03.436 13:28:15 -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:03.436 13:28:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:03.436 13:28:15 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:03.436 13:28:15 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:03.436 13:28:15 -- scripts/common.sh@336 -- # IFS=.-:
00:09:03.436 13:28:15 -- scripts/common.sh@336 -- # read -ra ver1
00:09:03.436 13:28:15 -- scripts/common.sh@337 -- # IFS=.-:
00:09:03.436 13:28:15 -- scripts/common.sh@337 -- # read -ra ver2
00:09:03.436 13:28:15 -- scripts/common.sh@338 -- # local 'op=<'
00:09:03.436 13:28:15 -- scripts/common.sh@340 -- # ver1_l=2
00:09:03.436 13:28:15 -- scripts/common.sh@341 -- # ver2_l=1
00:09:03.436 13:28:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:03.436 13:28:15 -- scripts/common.sh@344 -- # case "$op" in
00:09:03.436 13:28:15 -- scripts/common.sh@345 -- # : 1
00:09:03.436 13:28:15 -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:03.436 13:28:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:09:03.436 13:28:15 -- scripts/common.sh@365 -- # decimal 1 00:09:03.436 13:28:15 -- scripts/common.sh@353 -- # local d=1 00:09:03.436 13:28:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.436 13:28:15 -- scripts/common.sh@355 -- # echo 1 00:09:03.436 13:28:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.436 13:28:15 -- scripts/common.sh@366 -- # decimal 2 00:09:03.436 13:28:15 -- scripts/common.sh@353 -- # local d=2 00:09:03.436 13:28:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.436 13:28:15 -- scripts/common.sh@355 -- # echo 2 00:09:03.436 13:28:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.436 13:28:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.436 13:28:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.436 13:28:15 -- scripts/common.sh@368 -- # return 0 00:09:03.436 13:28:15 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.436 13:28:15 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:03.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.436 --rc genhtml_branch_coverage=1 00:09:03.436 --rc genhtml_function_coverage=1 00:09:03.436 --rc genhtml_legend=1 00:09:03.436 --rc geninfo_all_blocks=1 00:09:03.436 --rc geninfo_unexecuted_blocks=1 00:09:03.436 00:09:03.436 ' 00:09:03.436 13:28:15 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:03.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.436 --rc genhtml_branch_coverage=1 00:09:03.436 --rc genhtml_function_coverage=1 00:09:03.436 --rc genhtml_legend=1 00:09:03.436 --rc geninfo_all_blocks=1 00:09:03.436 --rc geninfo_unexecuted_blocks=1 00:09:03.436 00:09:03.436 ' 00:09:03.436 13:28:15 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:03.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.436 --rc genhtml_branch_coverage=1 00:09:03.436 --rc genhtml_function_coverage=1 00:09:03.436 --rc genhtml_legend=1 00:09:03.436 --rc geninfo_all_blocks=1 00:09:03.436 --rc geninfo_unexecuted_blocks=1 00:09:03.436 00:09:03.436 ' 00:09:03.436 13:28:15 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:03.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.436 --rc genhtml_branch_coverage=1 00:09:03.436 --rc genhtml_function_coverage=1 00:09:03.436 --rc genhtml_legend=1 00:09:03.436 --rc geninfo_all_blocks=1 00:09:03.436 --rc geninfo_unexecuted_blocks=1 00:09:03.436 00:09:03.436 ' 00:09:03.436 13:28:15 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:03.436 13:28:15 -- nvmf/common.sh@7 -- # uname -s 00:09:03.436 13:28:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.436 13:28:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.436 13:28:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.436 13:28:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.436 13:28:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.436 13:28:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.436 13:28:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.436 13:28:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.436 13:28:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.436 13:28:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.436 13:28:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9631dc76-024e-47d8-ab58-2f4e4cd41f29 00:09:03.436 
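The scripts/common.sh trace above (cmp_versions, ver1_l=2, ver2_l=1, and the decimal helper for each field) is a field-by-field dotted-version comparison; here it decides that lcov 1.15 is older than 2 before enabling branch coverage. A condensed sketch of the same logic, not the verbatim helper:

  # Sketch of the lt/cmp_versions logic traced above.
  version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                                        # equal is not less-than
  }
  version_lt 1.15 2 && echo "1.15 < 2"              # matches the trace: lt 1.15 2 succeeds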
13:28:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=9631dc76-024e-47d8-ab58-2f4e4cd41f29 00:09:03.436 13:28:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.436 13:28:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.436 13:28:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:03.436 13:28:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.436 13:28:15 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.436 13:28:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.696 13:28:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.696 13:28:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.696 13:28:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.696 13:28:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.696 13:28:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.696 13:28:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.696 13:28:15 -- paths/export.sh@5 -- # export PATH 00:09:03.696 13:28:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.696 13:28:15 -- nvmf/common.sh@51 -- # : 0 00:09:03.696 13:28:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.696 13:28:15 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.696 13:28:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.696 13:28:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.696 13:28:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.696 13:28:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.696 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.696 13:28:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.696 13:28:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.696 13:28:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.696 13:28:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:09:03.696 13:28:15 -- spdk/autotest.sh@32 -- # uname -s 00:09:03.696 13:28:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:09:03.696 13:28:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:09:03.696 13:28:15 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:03.696 13:28:15 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:09:03.696 13:28:15 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:03.696 13:28:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:09:03.696 13:28:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:09:03.696 13:28:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:09:03.696 13:28:15 -- spdk/autotest.sh@48 -- # udevadm_pid=54966 00:09:03.696 13:28:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:09:03.696 13:28:15 -- pm/common@17 -- # local monitor 00:09:03.696 13:28:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:03.696 13:28:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:03.696 13:28:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:09:03.696 13:28:15 -- pm/common@25 -- # sleep 1 00:09:03.696 13:28:15 -- pm/common@21 -- # date +%s 00:09:03.696 13:28:15 -- pm/common@21 -- # date +%s 00:09:03.696 13:28:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732109295 00:09:03.696 13:28:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732109295 00:09:03.696 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732109295_collect-cpu-load.pm.log 00:09:03.696 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732109295_collect-vmstat.pm.log 00:09:04.633 13:28:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:09:04.633 13:28:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:09:04.633 13:28:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:04.633 13:28:16 -- common/autotest_common.sh@10 -- # set +x 00:09:04.633 13:28:16 -- spdk/autotest.sh@59 -- # create_test_list 00:09:04.633 13:28:16 -- common/autotest_common.sh@752 -- # xtrace_disable 00:09:04.633 13:28:16 -- common/autotest_common.sh@10 -- # set +x 00:09:04.633 13:28:16 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:09:04.633 13:28:16 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:09:04.633 13:28:16 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:09:04.633 13:28:16 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:09:04.633 13:28:16 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:09:04.633 13:28:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:09:04.633 13:28:16 -- common/autotest_common.sh@1457 -- # uname 00:09:04.633 13:28:16 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:09:04.633 13:28:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:09:04.633 13:28:16 -- common/autotest_common.sh@1477 -- # uname 00:09:04.633 13:28:16 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:09:04.633 13:28:16 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:09:04.633 13:28:16 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:09:04.892 lcov: LCOV version 1.15 00:09:04.892 13:28:16 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:09:19.768 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:09:19.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:09:37.908 13:28:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:37.908 13:28:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:37.908 13:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:37.908 13:28:47 -- spdk/autotest.sh@78 -- # rm -f 00:09:37.908 13:28:47 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:37.908 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.908 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:09:37.908 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:09:37.909 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:09:37.909 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:09:37.909 13:28:48 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:09:37.909 13:28:48 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:37.909 13:28:48 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:37.909 13:28:48 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:09:37.909 13:28:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:37.909 13:28:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:09:37.909 13:28:48 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:37.909 13:28:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:37.909 13:28:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:37.909 13:28:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:37.909 13:28:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:09:37.909 13:28:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:09:37.909 13:28:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:37.909 13:28:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:37.909 13:28:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:37.909 13:28:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:09:37.909 13:28:48 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:09:37.909 13:28:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:37.909 13:28:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:37.909 13:28:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:37.909 13:28:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:09:37.909 13:28:48 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:09:37.909 13:28:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:37.909 13:28:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:37.909 13:28:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:37.909 13:28:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:09:37.909 13:28:48 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:09:37.909 13:28:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:37.909 13:28:48 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:37.909 13:28:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:37.909 13:28:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:09:37.909 13:28:48 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:09:37.909 13:28:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:37.909 13:28:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:37.909 13:28:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:37.909 13:28:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:09:37.909 13:28:48 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:09:37.909 13:28:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:37.909 13:28:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:37.909 13:28:48 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:37.909 13:28:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:37.909 13:28:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:37.909 13:28:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:09:37.909 13:28:48 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:37.909 13:28:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:37.909 No valid GPT data, bailing 00:09:37.909 13:28:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:37.909 13:28:48 -- scripts/common.sh@394 -- # pt= 00:09:37.909 13:28:48 -- scripts/common.sh@395 -- # return 1 00:09:37.909 13:28:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:37.909 1+0 records in 00:09:37.909 1+0 records out 00:09:37.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158446 s, 66.2 MB/s 00:09:37.909 13:28:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:37.909 13:28:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:37.909 13:28:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:09:37.909 13:28:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:09:37.909 13:28:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:09:37.909 No valid GPT data, bailing 00:09:37.909 13:28:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:09:37.909 13:28:48 -- scripts/common.sh@394 -- # pt= 00:09:37.909 13:28:48 -- scripts/common.sh@395 -- # return 1 00:09:37.909 13:28:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:09:37.909 1+0 records in 00:09:37.909 1+0 records out 00:09:37.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0060507 s, 173 MB/s 00:09:37.909 13:28:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:37.909 13:28:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:37.909 13:28:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:09:37.909 13:28:48 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:09:37.909 13:28:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:09:37.909 No valid GPT data, bailing 00:09:37.909 13:28:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:09:37.909 13:28:48 -- scripts/common.sh@394 -- # pt= 00:09:37.909 13:28:48 -- scripts/common.sh@395 -- # return 1 00:09:37.909 13:28:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:09:37.909 1+0 
records in 00:09:37.909 1+0 records out 00:09:37.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596953 s, 176 MB/s 00:09:37.909 13:28:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:37.909 13:28:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:37.909 13:28:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:09:37.909 13:28:48 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:09:37.909 13:28:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:09:37.909 No valid GPT data, bailing 00:09:37.909 13:28:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:09:37.909 13:28:48 -- scripts/common.sh@394 -- # pt= 00:09:37.909 13:28:48 -- scripts/common.sh@395 -- # return 1 00:09:37.909 13:28:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:09:37.909 1+0 records in 00:09:37.909 1+0 records out 00:09:37.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00591844 s, 177 MB/s 00:09:37.909 13:28:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:37.909 13:28:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:37.909 13:28:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:09:37.909 13:28:48 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:09:37.909 13:28:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:09:37.909 No valid GPT data, bailing 00:09:37.909 13:28:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:09:37.909 13:28:48 -- scripts/common.sh@394 -- # pt= 00:09:37.909 13:28:48 -- scripts/common.sh@395 -- # return 1 00:09:37.909 13:28:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:09:37.909 1+0 records in 00:09:37.909 1+0 records out 00:09:37.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00594133 s, 176 MB/s 00:09:37.909 13:28:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:37.909 13:28:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:37.909 13:28:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:09:37.909 13:28:48 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:09:37.909 13:28:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:09:37.909 No valid GPT data, bailing 00:09:37.909 13:28:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:09:37.909 13:28:48 -- scripts/common.sh@394 -- # pt= 00:09:37.909 13:28:48 -- scripts/common.sh@395 -- # return 1 00:09:37.909 13:28:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:09:37.909 1+0 records in 00:09:37.909 1+0 records out 00:09:37.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00674545 s, 155 MB/s 00:09:37.909 13:28:48 -- spdk/autotest.sh@105 -- # sync 00:09:37.909 13:28:49 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:37.909 13:28:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:37.909 13:28:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:40.445 13:28:52 -- spdk/autotest.sh@111 -- # uname -s 00:09:40.445 13:28:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:09:40.445 13:28:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:09:40.445 13:28:52 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:41.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.580 
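The dd/blkid sequence above repeats one cleanup recipe per namespace: probe for a GPT label with spdk-gpt.py, fall back to blkid, and when neither finds a partition table ("No valid GPT data, bailing", pt=) zero the first MiB so stale labels cannot leak into the tests. Roughly, assuming the same helpers and that spdk-gpt.py exits nonzero when no label is present:

  # Sketch of the per-namespace wipe loop traced above (helper behavior assumed).
  shopt -s extglob
  for dev in /dev/nvme*n!(*p*); do                  # namespaces, not partitions
    if ! scripts/spdk-gpt.py "$dev" && [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
      dd if=/dev/zero of="$dev" bs=1M count=1       # clear the label area
    fi
  done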
Hugepages
00:09:41.580 node hugesize free / total
00:09:41.580 node0 1048576kB 0 / 0
00:09:41.580 node0 2048kB 0 / 0
00:09:41.580
00:09:41.580 Type BDF Vendor Device NUMA Driver Device Block devices
00:09:41.840 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:09:41.840 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:09:42.099 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:09:42.099 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:09:42.358 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:09:42.358 13:28:54 -- spdk/autotest.sh@117 -- # uname -s
00:09:42.358 13:28:54 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:09:42.358 13:28:54 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:09:42.358 13:28:54 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:42.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:43.863 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:43.863 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:43.863 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:43.863 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:43.863 13:28:55 -- common/autotest_common.sh@1517 -- # sleep 1
00:09:45.240 13:28:56 -- common/autotest_common.sh@1518 -- # bdfs=()
00:09:45.240 13:28:56 -- common/autotest_common.sh@1518 -- # local bdfs
00:09:45.240 13:28:56 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:09:45.240 13:28:56 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:09:45.240 13:28:56 -- common/autotest_common.sh@1498 -- # bdfs=()
00:09:45.240 13:28:56 -- common/autotest_common.sh@1498 -- # local bdfs
00:09:45.240 13:28:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:09:45.240 13:28:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:09:45.240 13:28:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:09:45.240 13:28:56 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:09:45.240 13:28:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:09:45.240 13:28:56 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:45.809 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:46.070 Waiting for block devices as requested
00:09:46.070 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:09:46.070 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:09:46.331 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:09:46.331 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:09:51.601 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:09:51.601 13:29:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:09:51.601 13:29:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:09:51.601 13:29:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:09:51.601 13:29:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:09:51.601 13:29:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:09:51.601 13:29:03 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:09:51.601 13:29:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:51.601 13:29:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:09:51.601 13:29:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:09:51.601 13:29:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:09:51.601 13:29:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:09:51.601 13:29:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:51.601 13:29:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:51.601 13:29:03 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:51.601 13:29:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:51.601 13:29:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:51.601 13:29:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:51.601 13:29:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:09:51.601 13:29:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:51.601 13:29:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:51.601 13:29:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:51.601 13:29:03 -- common/autotest_common.sh@1543 -- # continue 00:09:51.601 13:29:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:51.601 13:29:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:09:51.601 13:29:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:51.601 13:29:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:09:51.601 13:29:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:51.601 13:29:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:09:51.601 13:29:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:51.601 13:29:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:09:51.601 13:29:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:09:51.601 13:29:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:09:51.601 13:29:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:09:51.601 13:29:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:51.601 13:29:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:51.601 13:29:03 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:51.601 13:29:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:51.601 13:29:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:51.601 13:29:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:09:51.601 13:29:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:51.601 13:29:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:51.601 13:29:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:51.601 13:29:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:51.601 13:29:03 -- common/autotest_common.sh@1543 -- # continue 00:09:51.601 13:29:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:51.601 13:29:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:09:51.601 13:29:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:51.601 13:29:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:09:51.601 13:29:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:09:51.601 13:29:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:09:51.601 13:29:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:09:51.601 13:29:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:09:51.601 13:29:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:09:51.601 13:29:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:09:51.601 13:29:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:09:51.602 13:29:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:51.602 13:29:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:51.602 13:29:03 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:51.602 13:29:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:51.602 13:29:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:51.602 13:29:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:09:51.602 13:29:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:51.602 13:29:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:51.602 13:29:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:51.602 13:29:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:51.602 13:29:03 -- common/autotest_common.sh@1543 -- # continue 00:09:51.602 13:29:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:51.602 13:29:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:09:51.602 13:29:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:09:51.602 13:29:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:09:51.602 13:29:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:09:51.602 13:29:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:09:51.602 13:29:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:09:51.602 13:29:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:09:51.602 13:29:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:09:51.602 13:29:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:09:51.602 13:29:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:09:51.602 13:29:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:51.602 13:29:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:51.602 13:29:03 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:51.602 13:29:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:51.602 13:29:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:51.602 13:29:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:09:51.602 13:29:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:51.602 13:29:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:51.602 13:29:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:51.602 13:29:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
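Each controller above is interrogated the same way: OACS is pulled from nvme id-ctrl, masked for the namespace-management bit, and unvmcap is checked so a controller with nothing unallocated is skipped. The repeated traces collapse to something like this (the controller path is an example):

  # Sketch of the id-ctrl checks repeated above for nvme0..nvme3.
  ctrlr=/dev/nvme1                                              # example controller
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)       # ' 0x12a' in this log
  oacs_ns_manage=$(( oacs & 0x8 ))                              # bit 3: namespace management
  if (( oacs_ns_manage != 0 )); then
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && echo "$ctrlr: fully allocated, nothing to revert"
  fi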
00:09:51.602 13:29:03 -- common/autotest_common.sh@1543 -- # continue 00:09:51.602 13:29:03 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:51.602 13:29:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:51.602 13:29:03 -- common/autotest_common.sh@10 -- # set +x 00:09:51.860 13:29:03 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:51.860 13:29:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.860 13:29:03 -- common/autotest_common.sh@10 -- # set +x 00:09:51.860 13:29:03 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:52.427 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:53.377 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.377 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.377 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.377 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.377 13:29:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:53.377 13:29:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:53.377 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:09:53.377 13:29:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:53.377 13:29:05 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:09:53.377 13:29:05 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:09:53.377 13:29:05 -- common/autotest_common.sh@1563 -- # bdfs=() 00:09:53.377 13:29:05 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:09:53.377 13:29:05 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:09:53.377 13:29:05 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:09:53.377 13:29:05 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:09:53.377 13:29:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:53.377 13:29:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:53.377 13:29:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:53.377 13:29:05 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:53.377 13:29:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:53.636 13:29:05 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:53.636 13:29:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:53.636 13:29:05 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:53.636 13:29:05 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:09:53.637 13:29:05 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:53.637 13:29:05 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:53.637 13:29:05 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:53.637 13:29:05 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:09:53.637 13:29:05 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:53.637 13:29:05 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:53.637 13:29:05 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:53.637 13:29:05 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:09:53.637 13:29:05 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:53.637 13:29:05 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
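The opal_revert_cleanup pass above builds the controller list from gen_nvme.sh and compares each controller's PCI device ID with 0x0a54 (an Intel data-center NVMe device ID); the QEMU controllers here all report 0x0010, so nothing matches and the cleanup is a no-op. The scan reduces to roughly:

  # Sketch of the 0x0a54 scan traced above (run from the spdk repo root).
  target=0x0a54
  matches=()
  for bdf in $(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$target" ]] && matches+=("$bdf")
  done
  (( ${#matches[@]} )) || echo "no matching controllers found"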
00:09:53.637 13:29:05 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:53.637 13:29:05 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:09:53.637 13:29:05 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:53.637 13:29:05 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:53.637 13:29:05 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:09:53.637 13:29:05 -- common/autotest_common.sh@1572 -- # return 0 00:09:53.637 13:29:05 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:09:53.637 13:29:05 -- common/autotest_common.sh@1580 -- # return 0 00:09:53.637 13:29:05 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:53.637 13:29:05 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:53.637 13:29:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:53.637 13:29:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:53.637 13:29:05 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:53.637 13:29:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.637 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:09:53.637 13:29:05 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:09:53.637 13:29:05 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:53.637 13:29:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:53.637 13:29:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.637 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:09:53.637 ************************************ 00:09:53.637 START TEST env 00:09:53.637 ************************************ 00:09:53.637 13:29:05 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:53.896 * Looking for test storage... 00:09:53.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:53.896 13:29:05 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:53.896 13:29:05 env -- common/autotest_common.sh@1693 -- # lcov --version 00:09:53.896 13:29:05 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:53.896 13:29:05 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:53.896 13:29:05 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.896 13:29:05 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.896 13:29:05 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.896 13:29:05 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.896 13:29:05 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.896 13:29:05 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.896 13:29:05 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.896 13:29:05 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.896 13:29:05 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.896 13:29:05 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.896 13:29:05 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.896 13:29:05 env -- scripts/common.sh@344 -- # case "$op" in 00:09:53.896 13:29:05 env -- scripts/common.sh@345 -- # : 1 00:09:53.896 13:29:05 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.896 13:29:05 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.896 13:29:05 env -- scripts/common.sh@365 -- # decimal 1 00:09:53.896 13:29:05 env -- scripts/common.sh@353 -- # local d=1 00:09:53.896 13:29:05 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.896 13:29:05 env -- scripts/common.sh@355 -- # echo 1 00:09:53.896 13:29:05 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.896 13:29:05 env -- scripts/common.sh@366 -- # decimal 2 00:09:53.896 13:29:05 env -- scripts/common.sh@353 -- # local d=2 00:09:53.896 13:29:05 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.896 13:29:05 env -- scripts/common.sh@355 -- # echo 2 00:09:53.896 13:29:05 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.896 13:29:05 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.896 13:29:05 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.896 13:29:05 env -- scripts/common.sh@368 -- # return 0 00:09:53.896 13:29:05 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.896 13:29:05 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:53.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.896 --rc genhtml_branch_coverage=1 00:09:53.896 --rc genhtml_function_coverage=1 00:09:53.896 --rc genhtml_legend=1 00:09:53.896 --rc geninfo_all_blocks=1 00:09:53.896 --rc geninfo_unexecuted_blocks=1 00:09:53.896 00:09:53.896 ' 00:09:53.896 13:29:05 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:53.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.896 --rc genhtml_branch_coverage=1 00:09:53.896 --rc genhtml_function_coverage=1 00:09:53.896 --rc genhtml_legend=1 00:09:53.896 --rc geninfo_all_blocks=1 00:09:53.896 --rc geninfo_unexecuted_blocks=1 00:09:53.896 00:09:53.896 ' 00:09:53.896 13:29:05 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:53.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.896 --rc genhtml_branch_coverage=1 00:09:53.896 --rc genhtml_function_coverage=1 00:09:53.896 --rc genhtml_legend=1 00:09:53.896 --rc geninfo_all_blocks=1 00:09:53.896 --rc geninfo_unexecuted_blocks=1 00:09:53.897 00:09:53.897 ' 00:09:53.897 13:29:05 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:53.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.897 --rc genhtml_branch_coverage=1 00:09:53.897 --rc genhtml_function_coverage=1 00:09:53.897 --rc genhtml_legend=1 00:09:53.897 --rc geninfo_all_blocks=1 00:09:53.897 --rc geninfo_unexecuted_blocks=1 00:09:53.897 00:09:53.897 ' 00:09:53.897 13:29:05 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:53.897 13:29:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:53.897 13:29:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.897 13:29:05 env -- common/autotest_common.sh@10 -- # set +x 00:09:53.897 ************************************ 00:09:53.897 START TEST env_memory 00:09:53.897 ************************************ 00:09:53.897 13:29:05 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:53.897 00:09:53.897 00:09:53.897 CUnit - A unit testing framework for C - Version 2.1-3 00:09:53.897 http://cunit.sourceforge.net/ 00:09:53.897 00:09:53.897 00:09:53.897 Suite: memory 00:09:53.897 Test: alloc and free memory map ...[2024-11-20 13:29:05.821533] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:54.156 passed 00:09:54.156 Test: mem map translation ...[2024-11-20 13:29:05.869640] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:54.156 [2024-11-20 13:29:05.869831] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:54.156 [2024-11-20 13:29:05.870088] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:54.156 [2024-11-20 13:29:05.870240] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:54.156 passed 00:09:54.156 Test: mem map registration ...[2024-11-20 13:29:05.938441] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:54.156 [2024-11-20 13:29:05.938638] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:54.156 passed 00:09:54.156 Test: mem map adjacent registrations ...passed 00:09:54.156 00:09:54.156 Run Summary: Type Total Ran Passed Failed Inactive 00:09:54.156 suites 1 1 n/a 0 0 00:09:54.156 tests 4 4 4 0 0 00:09:54.156 asserts 152 152 152 0 n/a 00:09:54.156 00:09:54.156 Elapsed time = 0.250 seconds 00:09:54.156 00:09:54.156 real 0m0.314s 00:09:54.156 user 0m0.266s 00:09:54.156 sys 0m0.035s 00:09:54.156 13:29:06 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.156 13:29:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:54.156 ************************************ 00:09:54.156 END TEST env_memory 00:09:54.156 ************************************ 00:09:54.415 13:29:06 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:54.415 13:29:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:54.415 13:29:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.415 13:29:06 env -- common/autotest_common.sh@10 -- # set +x 00:09:54.415 ************************************ 00:09:54.415 START TEST env_vtophys 00:09:54.415 ************************************ 00:09:54.415 13:29:06 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:54.415 EAL: lib.eal log level changed from notice to debug 00:09:54.415 EAL: Detected lcore 0 as core 0 on socket 0 00:09:54.415 EAL: Detected lcore 1 as core 0 on socket 0 00:09:54.415 EAL: Detected lcore 2 as core 0 on socket 0 00:09:54.415 EAL: Detected lcore 3 as core 0 on socket 0 00:09:54.415 EAL: Detected lcore 4 as core 0 on socket 0 00:09:54.415 EAL: Detected lcore 5 as core 0 on socket 0 00:09:54.415 EAL: Detected lcore 6 as core 0 on socket 0 00:09:54.415 EAL: Detected lcore 7 as core 0 on socket 0 00:09:54.415 EAL: Detected lcore 8 as core 0 on socket 0 00:09:54.415 EAL: Detected lcore 9 as core 0 on socket 0 00:09:54.415 EAL: Maximum logical cores by configuration: 128 00:09:54.415 EAL: Detected CPU lcores: 10 00:09:54.415 EAL: Detected NUMA nodes: 1 00:09:54.415 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:54.415 EAL: Detected shared linkage of DPDK 00:09:54.415 EAL: No 
shared files mode enabled, IPC will be disabled 00:09:54.415 EAL: Selected IOVA mode 'PA' 00:09:54.415 EAL: Probing VFIO support... 00:09:54.415 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:54.415 EAL: VFIO modules not loaded, skipping VFIO support... 00:09:54.415 EAL: Ask a virtual area of 0x2e000 bytes 00:09:54.415 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:54.415 EAL: Setting up physically contiguous memory... 00:09:54.415 EAL: Setting maximum number of open files to 524288 00:09:54.415 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:54.415 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:54.415 EAL: Ask a virtual area of 0x61000 bytes 00:09:54.415 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:54.415 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:54.415 EAL: Ask a virtual area of 0x400000000 bytes 00:09:54.415 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:54.415 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:54.415 EAL: Ask a virtual area of 0x61000 bytes 00:09:54.415 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:54.415 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:54.415 EAL: Ask a virtual area of 0x400000000 bytes 00:09:54.415 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:54.415 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:54.415 EAL: Ask a virtual area of 0x61000 bytes 00:09:54.415 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:54.415 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:54.415 EAL: Ask a virtual area of 0x400000000 bytes 00:09:54.415 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:54.415 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:54.415 EAL: Ask a virtual area of 0x61000 bytes 00:09:54.415 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:54.415 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:54.415 EAL: Ask a virtual area of 0x400000000 bytes 00:09:54.415 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:54.415 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:54.415 EAL: Hugepages will be freed exactly as allocated. 00:09:54.415 EAL: No shared files mode enabled, IPC is disabled 00:09:54.415 EAL: No shared files mode enabled, IPC is disabled 00:09:54.415 EAL: TSC frequency is ~2490000 KHz 00:09:54.415 EAL: Main lcore 0 is ready (tid=7f80c88a1a40;cpuset=[0]) 00:09:54.415 EAL: Trying to obtain current memory policy. 00:09:54.415 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:54.415 EAL: Restoring previous memory policy: 0 00:09:54.415 EAL: request: mp_malloc_sync 00:09:54.415 EAL: No shared files mode enabled, IPC is disabled 00:09:54.415 EAL: Heap on socket 0 was expanded by 2MB 00:09:54.415 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:54.415 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:54.415 EAL: Mem event callback 'spdk:(nil)' registered 00:09:54.415 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:09:54.674 00:09:54.674 00:09:54.674 CUnit - A unit testing framework for C - Version 2.1-3 00:09:54.674 http://cunit.sourceforge.net/ 00:09:54.674 00:09:54.674 00:09:54.674 Suite: components_suite 00:09:54.933 Test: vtophys_malloc_test ...passed 00:09:54.933 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:54.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:54.933 EAL: Restoring previous memory policy: 4 00:09:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:09:54.933 EAL: request: mp_malloc_sync 00:09:54.933 EAL: No shared files mode enabled, IPC is disabled 00:09:54.933 EAL: Heap on socket 0 was expanded by 4MB 00:09:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:09:54.933 EAL: request: mp_malloc_sync 00:09:54.933 EAL: No shared files mode enabled, IPC is disabled 00:09:54.933 EAL: Heap on socket 0 was shrunk by 4MB 00:09:54.933 EAL: Trying to obtain current memory policy. 00:09:54.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:54.933 EAL: Restoring previous memory policy: 4 00:09:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:09:54.933 EAL: request: mp_malloc_sync 00:09:54.933 EAL: No shared files mode enabled, IPC is disabled 00:09:54.933 EAL: Heap on socket 0 was expanded by 6MB 00:09:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:09:54.933 EAL: request: mp_malloc_sync 00:09:54.933 EAL: No shared files mode enabled, IPC is disabled 00:09:54.933 EAL: Heap on socket 0 was shrunk by 6MB 00:09:54.933 EAL: Trying to obtain current memory policy. 00:09:54.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:54.933 EAL: Restoring previous memory policy: 4 00:09:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:09:54.933 EAL: request: mp_malloc_sync 00:09:54.933 EAL: No shared files mode enabled, IPC is disabled 00:09:54.933 EAL: Heap on socket 0 was expanded by 10MB 00:09:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:09:54.933 EAL: request: mp_malloc_sync 00:09:54.933 EAL: No shared files mode enabled, IPC is disabled 00:09:54.933 EAL: Heap on socket 0 was shrunk by 10MB 00:09:54.933 EAL: Trying to obtain current memory policy. 00:09:54.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:54.933 EAL: Restoring previous memory policy: 4 00:09:54.933 EAL: Calling mem event callback 'spdk:(nil)' 00:09:54.933 EAL: request: mp_malloc_sync 00:09:54.933 EAL: No shared files mode enabled, IPC is disabled 00:09:54.933 EAL: Heap on socket 0 was expanded by 18MB 00:09:55.192 EAL: Calling mem event callback 'spdk:(nil)' 00:09:55.192 EAL: request: mp_malloc_sync 00:09:55.192 EAL: No shared files mode enabled, IPC is disabled 00:09:55.192 EAL: Heap on socket 0 was shrunk by 18MB 00:09:55.192 EAL: Trying to obtain current memory policy. 00:09:55.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:55.192 EAL: Restoring previous memory policy: 4 00:09:55.192 EAL: Calling mem event callback 'spdk:(nil)' 00:09:55.192 EAL: request: mp_malloc_sync 00:09:55.192 EAL: No shared files mode enabled, IPC is disabled 00:09:55.192 EAL: Heap on socket 0 was expanded by 34MB 00:09:55.192 EAL: Calling mem event callback 'spdk:(nil)' 00:09:55.192 EAL: request: mp_malloc_sync 00:09:55.192 EAL: No shared files mode enabled, IPC is disabled 00:09:55.192 EAL: Heap on socket 0 was shrunk by 34MB 00:09:55.192 EAL: Trying to obtain current memory policy. 
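Each "Heap on socket 0 was expanded by N MB" / "shrunk by N MB" pair above is one round of vtophys_malloc_test: the test allocates a buffer roughly twice the size of the previous one, the 'spdk:(nil)' mem event callback fires while DPDK grows its malloc heap, and freeing the buffer shrinks the heap again; the rounds continue below up to 1026MB. A sketch of re-running this suite by hand, using the binary path this log shows; scripts/setup.sh and the HUGEMEM value are assumptions, size the hugepage pool to the machine:

  sudo HUGEMEM=4096 /home/vagrant/spdk_repo/spdk/scripts/setup.sh   # assumed hugepage helper; 4096MB is a guess
  /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys             # binary path taken from this log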
00:09:55.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:55.192 EAL: Restoring previous memory policy: 4 00:09:55.192 EAL: Calling mem event callback 'spdk:(nil)' 00:09:55.192 EAL: request: mp_malloc_sync 00:09:55.192 EAL: No shared files mode enabled, IPC is disabled 00:09:55.192 EAL: Heap on socket 0 was expanded by 66MB 00:09:55.449 EAL: Calling mem event callback 'spdk:(nil)' 00:09:55.449 EAL: request: mp_malloc_sync 00:09:55.449 EAL: No shared files mode enabled, IPC is disabled 00:09:55.449 EAL: Heap on socket 0 was shrunk by 66MB 00:09:55.449 EAL: Trying to obtain current memory policy. 00:09:55.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:55.449 EAL: Restoring previous memory policy: 4 00:09:55.449 EAL: Calling mem event callback 'spdk:(nil)' 00:09:55.449 EAL: request: mp_malloc_sync 00:09:55.449 EAL: No shared files mode enabled, IPC is disabled 00:09:55.449 EAL: Heap on socket 0 was expanded by 130MB 00:09:55.707 EAL: Calling mem event callback 'spdk:(nil)' 00:09:55.707 EAL: request: mp_malloc_sync 00:09:55.707 EAL: No shared files mode enabled, IPC is disabled 00:09:55.707 EAL: Heap on socket 0 was shrunk by 130MB 00:09:55.966 EAL: Trying to obtain current memory policy. 00:09:55.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:55.966 EAL: Restoring previous memory policy: 4 00:09:55.966 EAL: Calling mem event callback 'spdk:(nil)' 00:09:55.966 EAL: request: mp_malloc_sync 00:09:55.966 EAL: No shared files mode enabled, IPC is disabled 00:09:55.966 EAL: Heap on socket 0 was expanded by 258MB 00:09:56.533 EAL: Calling mem event callback 'spdk:(nil)' 00:09:56.533 EAL: request: mp_malloc_sync 00:09:56.533 EAL: No shared files mode enabled, IPC is disabled 00:09:56.533 EAL: Heap on socket 0 was shrunk by 258MB 00:09:57.100 EAL: Trying to obtain current memory policy. 00:09:57.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:57.100 EAL: Restoring previous memory policy: 4 00:09:57.100 EAL: Calling mem event callback 'spdk:(nil)' 00:09:57.100 EAL: request: mp_malloc_sync 00:09:57.100 EAL: No shared files mode enabled, IPC is disabled 00:09:57.100 EAL: Heap on socket 0 was expanded by 514MB 00:09:58.036 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.036 EAL: request: mp_malloc_sync 00:09:58.036 EAL: No shared files mode enabled, IPC is disabled 00:09:58.036 EAL: Heap on socket 0 was shrunk by 514MB 00:09:58.994 EAL: Trying to obtain current memory policy. 
00:09:58.994 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:58.994 EAL: Restoring previous memory policy: 4 00:09:58.994 EAL: Calling mem event callback 'spdk:(nil)' 00:09:58.994 EAL: request: mp_malloc_sync 00:09:58.994 EAL: No shared files mode enabled, IPC is disabled 00:09:58.994 EAL: Heap on socket 0 was expanded by 1026MB 00:10:00.897 EAL: Calling mem event callback 'spdk:(nil)' 00:10:01.156 EAL: request: mp_malloc_sync 00:10:01.156 EAL: No shared files mode enabled, IPC is disabled 00:10:01.156 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:03.068 passed 00:10:03.068 00:10:03.068 Run Summary: Type Total Ran Passed Failed Inactive 00:10:03.068 suites 1 1 n/a 0 0 00:10:03.068 tests 2 2 2 0 0 00:10:03.068 asserts 5817 5817 5817 0 n/a 00:10:03.068 00:10:03.068 Elapsed time = 8.208 seconds 00:10:03.068 EAL: Calling mem event callback 'spdk:(nil)' 00:10:03.068 EAL: request: mp_malloc_sync 00:10:03.068 EAL: No shared files mode enabled, IPC is disabled 00:10:03.068 EAL: Heap on socket 0 was shrunk by 2MB 00:10:03.068 EAL: No shared files mode enabled, IPC is disabled 00:10:03.068 EAL: No shared files mode enabled, IPC is disabled 00:10:03.068 EAL: No shared files mode enabled, IPC is disabled 00:10:03.068 00:10:03.068 real 0m8.551s 00:10:03.068 user 0m7.500s 00:10:03.068 sys 0m0.884s 00:10:03.068 13:29:14 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.068 ************************************ 00:10:03.068 END TEST env_vtophys 00:10:03.068 ************************************ 00:10:03.068 13:29:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:03.068 13:29:14 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:03.068 13:29:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.068 13:29:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.068 13:29:14 env -- common/autotest_common.sh@10 -- # set +x 00:10:03.068 ************************************ 00:10:03.068 START TEST env_pci 00:10:03.068 ************************************ 00:10:03.068 13:29:14 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:03.068 00:10:03.068 00:10:03.068 CUnit - A unit testing framework for C - Version 2.1-3 00:10:03.068 http://cunit.sourceforge.net/ 00:10:03.068 00:10:03.068 00:10:03.068 Suite: pci 00:10:03.068 Test: pci_hook ...[2024-11-20 13:29:14.808181] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57828 has claimed it 00:10:03.068 passed 00:10:03.068 00:10:03.068 Run Summary: Type Total Ran Passed Failed Inactive 00:10:03.068 suites 1 1 n/a 0 0 00:10:03.068 tests 1 1 1 0 0 00:10:03.068 asserts 25 25 25 0 n/a 00:10:03.068 00:10:03.068 Elapsed time = 0.008 seconds 00:10:03.068 EAL: Cannot find device (10000:00:01.0) 00:10:03.068 EAL: Failed to attach device on primary process 00:10:03.068 ************************************ 00:10:03.068 END TEST env_pci 00:10:03.068 ************************************ 00:10:03.068 00:10:03.068 real 0m0.114s 00:10:03.068 user 0m0.055s 00:10:03.068 sys 0m0.058s 00:10:03.068 13:29:14 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.068 13:29:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:10:03.068 13:29:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:03.068 13:29:14 env -- env/env.sh@15 -- # uname 00:10:03.068 13:29:14 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:03.068 13:29:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:10:03.068 13:29:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:03.068 13:29:14 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:03.068 13:29:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.068 13:29:14 env -- common/autotest_common.sh@10 -- # set +x 00:10:03.068 ************************************ 00:10:03.068 START TEST env_dpdk_post_init 00:10:03.068 ************************************ 00:10:03.068 13:29:14 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:03.068 EAL: Detected CPU lcores: 10 00:10:03.068 EAL: Detected NUMA nodes: 1 00:10:03.068 EAL: Detected shared linkage of DPDK 00:10:03.327 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:03.327 EAL: Selected IOVA mode 'PA' 00:10:03.327 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:03.327 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:10:03.327 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:10:03.327 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:10:03.327 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:10:03.327 Starting DPDK initialization... 00:10:03.327 Starting SPDK post initialization... 00:10:03.327 SPDK NVMe probe 00:10:03.327 Attaching to 0000:00:10.0 00:10:03.327 Attaching to 0000:00:11.0 00:10:03.327 Attaching to 0000:00:12.0 00:10:03.327 Attaching to 0000:00:13.0 00:10:03.327 Attached to 0000:00:10.0 00:10:03.327 Attached to 0000:00:11.0 00:10:03.327 Attached to 0000:00:13.0 00:10:03.327 Attached to 0000:00:12.0 00:10:03.327 Cleaning up... 
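The four controllers probed and attached above are emulated NVMe devices (1b36:0010, most likely QEMU's NVMe model given the vagrant guest) at PCI addresses 0000:00:10.0 through 0000:00:13.0; attach order follows probe completion, which is why 13.0 lands before 12.0 in this run. The stage can be repeated by hand with the exact command the harness used, reconstructed from the run_test line above:

  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000

The fixed --base-virtaddr keeps DPDK's reserved virtual areas at a stable address, the same 0x200000000000 value the earlier env tests printed for their memseg lists.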
00:10:03.327 00:10:03.327 real 0m0.322s 00:10:03.327 user 0m0.117s 00:10:03.327 sys 0m0.107s 00:10:03.327 ************************************ 00:10:03.327 END TEST env_dpdk_post_init 00:10:03.327 ************************************ 00:10:03.327 13:29:15 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.327 13:29:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:03.585 13:29:15 env -- env/env.sh@26 -- # uname 00:10:03.585 13:29:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:03.585 13:29:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:03.585 13:29:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.585 13:29:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.585 13:29:15 env -- common/autotest_common.sh@10 -- # set +x 00:10:03.585 ************************************ 00:10:03.585 START TEST env_mem_callbacks 00:10:03.585 ************************************ 00:10:03.585 13:29:15 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:03.585 EAL: Detected CPU lcores: 10 00:10:03.585 EAL: Detected NUMA nodes: 1 00:10:03.585 EAL: Detected shared linkage of DPDK 00:10:03.585 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:03.585 EAL: Selected IOVA mode 'PA' 00:10:03.585 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:03.585 00:10:03.585 00:10:03.585 CUnit - A unit testing framework for C - Version 2.1-3 00:10:03.585 http://cunit.sourceforge.net/ 00:10:03.585 00:10:03.585 00:10:03.585 Suite: memory 00:10:03.585 Test: test ... 00:10:03.585 register 0x200000200000 2097152 00:10:03.585 malloc 3145728 00:10:03.585 register 0x200000400000 4194304 00:10:03.586 buf 0x2000004fffc0 len 3145728 PASSED 00:10:03.586 malloc 64 00:10:03.586 buf 0x2000004ffec0 len 64 PASSED 00:10:03.586 malloc 4194304 00:10:03.586 register 0x200000800000 6291456 00:10:03.844 buf 0x2000009fffc0 len 4194304 PASSED 00:10:03.844 free 0x2000004fffc0 3145728 00:10:03.844 free 0x2000004ffec0 64 00:10:03.844 unregister 0x200000400000 4194304 PASSED 00:10:03.844 free 0x2000009fffc0 4194304 00:10:03.844 unregister 0x200000800000 6291456 PASSED 00:10:03.844 malloc 8388608 00:10:03.844 register 0x200000400000 10485760 00:10:03.844 buf 0x2000005fffc0 len 8388608 PASSED 00:10:03.844 free 0x2000005fffc0 8388608 00:10:03.844 unregister 0x200000400000 10485760 PASSED 00:10:03.844 passed 00:10:03.844 00:10:03.844 Run Summary: Type Total Ran Passed Failed Inactive 00:10:03.844 suites 1 1 n/a 0 0 00:10:03.844 tests 1 1 1 0 0 00:10:03.844 asserts 15 15 15 0 n/a 00:10:03.844 00:10:03.844 Elapsed time = 0.063 seconds 00:10:03.844 00:10:03.844 real 0m0.272s 00:10:03.844 user 0m0.091s 00:10:03.844 sys 0m0.076s 00:10:03.844 ************************************ 00:10:03.844 END TEST env_mem_callbacks 00:10:03.844 ************************************ 00:10:03.844 13:29:15 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.844 13:29:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:03.844 ************************************ 00:10:03.844 END TEST env 00:10:03.844 ************************************ 00:10:03.844 00:10:03.844 real 0m10.202s 00:10:03.844 user 0m8.288s 00:10:03.844 sys 0m1.522s 00:10:03.844 13:29:15 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.844 13:29:15 env -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.844 13:29:15 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:03.844 13:29:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.844 13:29:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.844 13:29:15 -- common/autotest_common.sh@10 -- # set +x 00:10:03.844 ************************************ 00:10:03.844 START TEST rpc 00:10:03.844 ************************************ 00:10:03.844 13:29:15 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:04.103 * Looking for test storage... 00:10:04.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:04.103 13:29:15 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:04.103 13:29:15 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:04.103 13:29:15 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:04.103 13:29:15 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:04.103 13:29:15 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.103 13:29:15 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.103 13:29:15 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.103 13:29:15 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.103 13:29:15 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.103 13:29:15 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.103 13:29:15 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.103 13:29:15 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.103 13:29:15 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.103 13:29:15 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.103 13:29:15 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.103 13:29:15 rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:04.103 13:29:15 rpc -- scripts/common.sh@345 -- # : 1 00:10:04.103 13:29:15 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.103 13:29:15 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.103 13:29:15 rpc -- scripts/common.sh@365 -- # decimal 1 00:10:04.103 13:29:15 rpc -- scripts/common.sh@353 -- # local d=1 00:10:04.103 13:29:15 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.103 13:29:15 rpc -- scripts/common.sh@355 -- # echo 1 00:10:04.103 13:29:15 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.103 13:29:15 rpc -- scripts/common.sh@366 -- # decimal 2 00:10:04.103 13:29:15 rpc -- scripts/common.sh@353 -- # local d=2 00:10:04.103 13:29:15 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.103 13:29:15 rpc -- scripts/common.sh@355 -- # echo 2 00:10:04.103 13:29:15 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.103 13:29:15 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.103 13:29:15 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.103 13:29:15 rpc -- scripts/common.sh@368 -- # return 0 00:10:04.103 13:29:15 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.103 13:29:15 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:04.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.103 --rc genhtml_branch_coverage=1 00:10:04.103 --rc genhtml_function_coverage=1 00:10:04.103 --rc genhtml_legend=1 00:10:04.103 --rc geninfo_all_blocks=1 00:10:04.103 --rc geninfo_unexecuted_blocks=1 00:10:04.103 00:10:04.103 ' 00:10:04.103 13:29:15 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:04.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.103 --rc genhtml_branch_coverage=1 00:10:04.103 --rc genhtml_function_coverage=1 00:10:04.103 --rc genhtml_legend=1 00:10:04.103 --rc geninfo_all_blocks=1 00:10:04.103 --rc geninfo_unexecuted_blocks=1 00:10:04.103 00:10:04.103 ' 00:10:04.103 13:29:15 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:04.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.103 --rc genhtml_branch_coverage=1 00:10:04.103 --rc genhtml_function_coverage=1 00:10:04.103 --rc genhtml_legend=1 00:10:04.103 --rc geninfo_all_blocks=1 00:10:04.103 --rc geninfo_unexecuted_blocks=1 00:10:04.103 00:10:04.103 ' 00:10:04.103 13:29:15 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:04.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.103 --rc genhtml_branch_coverage=1 00:10:04.103 --rc genhtml_function_coverage=1 00:10:04.103 --rc genhtml_legend=1 00:10:04.103 --rc geninfo_all_blocks=1 00:10:04.103 --rc geninfo_unexecuted_blocks=1 00:10:04.103 00:10:04.104 ' 00:10:04.104 13:29:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57955 00:10:04.104 13:29:15 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:10:04.104 13:29:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:04.104 13:29:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57955 00:10:04.104 13:29:15 rpc -- common/autotest_common.sh@835 -- # '[' -z 57955 ']' 00:10:04.104 13:29:15 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.104 13:29:15 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.104 13:29:15 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
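waitforlisten blocks until the freshly started target (rpc.sh@64: spdk_tgt -e bdev, pid 57955) accepts connections on /var/tmp/spdk.sock; '-e bdev' enables the bdev tracepoint group that rpc_trace_cmd_test inspects later. A minimal sketch of the same handshake by hand; rpc_get_methods is a standard SPDK RPC but does not appear in this run, so treat the probe command as an assumption:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  # once the UNIX socket answers, any RPC confirms liveness:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods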
00:10:04.104 13:29:15 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.104 13:29:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.363 [2024-11-20 13:29:16.065651] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:04.363 [2024-11-20 13:29:16.066015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57955 ] 00:10:04.363 [2024-11-20 13:29:16.251103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.622 [2024-11-20 13:29:16.402008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:04.622 [2024-11-20 13:29:16.402113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57955' to capture a snapshot of events at runtime. 00:10:04.622 [2024-11-20 13:29:16.402129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.622 [2024-11-20 13:29:16.402145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.622 [2024-11-20 13:29:16.402156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57955 for offline analysis/debug. 00:10:04.622 [2024-11-20 13:29:16.403660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.559 13:29:17 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.559 13:29:17 rpc -- common/autotest_common.sh@868 -- # return 0 00:10:05.559 13:29:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:05.559 13:29:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:05.559 13:29:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:05.559 13:29:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:05.559 13:29:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:05.559 13:29:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.559 13:29:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.559 ************************************ 00:10:05.559 START TEST rpc_integrity 00:10:05.559 ************************************ 00:10:05.559 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:10:05.559 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:05.559 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.559 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:05.559 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.818 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:05.818 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:05.818 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:05.818 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:05.818 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.818 13:29:17 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:05.818 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.818 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:05.818 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:05.818 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.818 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:05.818 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.818 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:05.818 { 00:10:05.818 "name": "Malloc0", 00:10:05.818 "aliases": [ 00:10:05.818 "021ebb19-72e3-4ae6-a97b-87d1947f62f6" 00:10:05.818 ], 00:10:05.818 "product_name": "Malloc disk", 00:10:05.818 "block_size": 512, 00:10:05.818 "num_blocks": 16384, 00:10:05.818 "uuid": "021ebb19-72e3-4ae6-a97b-87d1947f62f6", 00:10:05.818 "assigned_rate_limits": { 00:10:05.818 "rw_ios_per_sec": 0, 00:10:05.818 "rw_mbytes_per_sec": 0, 00:10:05.818 "r_mbytes_per_sec": 0, 00:10:05.818 "w_mbytes_per_sec": 0 00:10:05.818 }, 00:10:05.818 "claimed": false, 00:10:05.818 "zoned": false, 00:10:05.818 "supported_io_types": { 00:10:05.818 "read": true, 00:10:05.818 "write": true, 00:10:05.818 "unmap": true, 00:10:05.818 "flush": true, 00:10:05.818 "reset": true, 00:10:05.818 "nvme_admin": false, 00:10:05.818 "nvme_io": false, 00:10:05.818 "nvme_io_md": false, 00:10:05.818 "write_zeroes": true, 00:10:05.818 "zcopy": true, 00:10:05.818 "get_zone_info": false, 00:10:05.818 "zone_management": false, 00:10:05.818 "zone_append": false, 00:10:05.818 "compare": false, 00:10:05.818 "compare_and_write": false, 00:10:05.818 "abort": true, 00:10:05.818 "seek_hole": false, 00:10:05.818 "seek_data": false, 00:10:05.818 "copy": true, 00:10:05.818 "nvme_iov_md": false 00:10:05.818 }, 00:10:05.818 "memory_domains": [ 00:10:05.818 { 00:10:05.818 "dma_device_id": "system", 00:10:05.818 "dma_device_type": 1 00:10:05.818 }, 00:10:05.818 { 00:10:05.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.818 "dma_device_type": 2 00:10:05.818 } 00:10:05.818 ], 00:10:05.818 "driver_specific": {} 00:10:05.818 } 00:10:05.818 ]' 00:10:05.818 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:05.818 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:05.818 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:05.818 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.818 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:05.818 [2024-11-20 13:29:17.668612] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:05.818 [2024-11-20 13:29:17.668703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.818 [2024-11-20 13:29:17.668735] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:05.818 [2024-11-20 13:29:17.668751] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.818 [2024-11-20 13:29:17.671544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.818 [2024-11-20 13:29:17.671615] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:05.818 Passthru0 00:10:05.818 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.818 
13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:05.818 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.818 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:05.818 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.818 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:05.818 { 00:10:05.818 "name": "Malloc0", 00:10:05.818 "aliases": [ 00:10:05.818 "021ebb19-72e3-4ae6-a97b-87d1947f62f6" 00:10:05.818 ], 00:10:05.818 "product_name": "Malloc disk", 00:10:05.818 "block_size": 512, 00:10:05.818 "num_blocks": 16384, 00:10:05.818 "uuid": "021ebb19-72e3-4ae6-a97b-87d1947f62f6", 00:10:05.818 "assigned_rate_limits": { 00:10:05.818 "rw_ios_per_sec": 0, 00:10:05.818 "rw_mbytes_per_sec": 0, 00:10:05.818 "r_mbytes_per_sec": 0, 00:10:05.818 "w_mbytes_per_sec": 0 00:10:05.818 }, 00:10:05.818 "claimed": true, 00:10:05.818 "claim_type": "exclusive_write", 00:10:05.818 "zoned": false, 00:10:05.818 "supported_io_types": { 00:10:05.818 "read": true, 00:10:05.818 "write": true, 00:10:05.818 "unmap": true, 00:10:05.818 "flush": true, 00:10:05.818 "reset": true, 00:10:05.818 "nvme_admin": false, 00:10:05.818 "nvme_io": false, 00:10:05.818 "nvme_io_md": false, 00:10:05.818 "write_zeroes": true, 00:10:05.818 "zcopy": true, 00:10:05.818 "get_zone_info": false, 00:10:05.818 "zone_management": false, 00:10:05.818 "zone_append": false, 00:10:05.818 "compare": false, 00:10:05.818 "compare_and_write": false, 00:10:05.818 "abort": true, 00:10:05.818 "seek_hole": false, 00:10:05.818 "seek_data": false, 00:10:05.818 "copy": true, 00:10:05.818 "nvme_iov_md": false 00:10:05.818 }, 00:10:05.818 "memory_domains": [ 00:10:05.818 { 00:10:05.818 "dma_device_id": "system", 00:10:05.818 "dma_device_type": 1 00:10:05.818 }, 00:10:05.818 { 00:10:05.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.818 "dma_device_type": 2 00:10:05.818 } 00:10:05.818 ], 00:10:05.818 "driver_specific": {} 00:10:05.818 }, 00:10:05.818 { 00:10:05.818 "name": "Passthru0", 00:10:05.818 "aliases": [ 00:10:05.818 "e7d58e98-507d-565b-8ed8-a6a220a9a10b" 00:10:05.818 ], 00:10:05.818 "product_name": "passthru", 00:10:05.818 "block_size": 512, 00:10:05.818 "num_blocks": 16384, 00:10:05.818 "uuid": "e7d58e98-507d-565b-8ed8-a6a220a9a10b", 00:10:05.818 "assigned_rate_limits": { 00:10:05.818 "rw_ios_per_sec": 0, 00:10:05.818 "rw_mbytes_per_sec": 0, 00:10:05.818 "r_mbytes_per_sec": 0, 00:10:05.818 "w_mbytes_per_sec": 0 00:10:05.818 }, 00:10:05.818 "claimed": false, 00:10:05.818 "zoned": false, 00:10:05.818 "supported_io_types": { 00:10:05.818 "read": true, 00:10:05.818 "write": true, 00:10:05.818 "unmap": true, 00:10:05.818 "flush": true, 00:10:05.818 "reset": true, 00:10:05.818 "nvme_admin": false, 00:10:05.818 "nvme_io": false, 00:10:05.818 "nvme_io_md": false, 00:10:05.818 "write_zeroes": true, 00:10:05.818 "zcopy": true, 00:10:05.818 "get_zone_info": false, 00:10:05.818 "zone_management": false, 00:10:05.818 "zone_append": false, 00:10:05.818 "compare": false, 00:10:05.818 "compare_and_write": false, 00:10:05.818 "abort": true, 00:10:05.818 "seek_hole": false, 00:10:05.818 "seek_data": false, 00:10:05.818 "copy": true, 00:10:05.818 "nvme_iov_md": false 00:10:05.818 }, 00:10:05.818 "memory_domains": [ 00:10:05.818 { 00:10:05.818 "dma_device_id": "system", 00:10:05.818 "dma_device_type": 1 00:10:05.818 }, 00:10:05.818 { 00:10:05.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.818 "dma_device_type": 2 
00:10:05.818 } 00:10:05.818 ], 00:10:05.818 "driver_specific": { 00:10:05.818 "passthru": { 00:10:05.818 "name": "Passthru0", 00:10:05.818 "base_bdev_name": "Malloc0" 00:10:05.818 } 00:10:05.818 } 00:10:05.818 } 00:10:05.818 ]' 00:10:05.818 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:05.818 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:05.818 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:05.818 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.818 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:05.818 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.077 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:06.077 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.077 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.077 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.077 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:06.077 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.077 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.077 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.077 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:06.077 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:06.077 ************************************ 00:10:06.077 END TEST rpc_integrity 00:10:06.077 ************************************ 00:10:06.077 13:29:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:06.077 00:10:06.077 real 0m0.374s 00:10:06.077 user 0m0.210s 00:10:06.077 sys 0m0.064s 00:10:06.077 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.077 13:29:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.077 13:29:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:06.077 13:29:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.077 13:29:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.077 13:29:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.077 ************************************ 00:10:06.077 START TEST rpc_plugins 00:10:06.077 ************************************ 00:10:06.077 13:29:17 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:10:06.077 13:29:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:06.077 13:29:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.077 13:29:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:06.077 13:29:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.077 13:29:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:06.077 13:29:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:06.077 13:29:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.077 13:29:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:06.077 13:29:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.077 13:29:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:06.077 { 00:10:06.077 "name": "Malloc1", 00:10:06.077 "aliases": 
[ 00:10:06.077 "f7dd4d96-fa2c-407b-997f-678b89ec3c91" 00:10:06.077 ], 00:10:06.077 "product_name": "Malloc disk", 00:10:06.077 "block_size": 4096, 00:10:06.077 "num_blocks": 256, 00:10:06.077 "uuid": "f7dd4d96-fa2c-407b-997f-678b89ec3c91", 00:10:06.077 "assigned_rate_limits": { 00:10:06.077 "rw_ios_per_sec": 0, 00:10:06.077 "rw_mbytes_per_sec": 0, 00:10:06.077 "r_mbytes_per_sec": 0, 00:10:06.077 "w_mbytes_per_sec": 0 00:10:06.077 }, 00:10:06.077 "claimed": false, 00:10:06.077 "zoned": false, 00:10:06.077 "supported_io_types": { 00:10:06.077 "read": true, 00:10:06.077 "write": true, 00:10:06.077 "unmap": true, 00:10:06.077 "flush": true, 00:10:06.077 "reset": true, 00:10:06.077 "nvme_admin": false, 00:10:06.078 "nvme_io": false, 00:10:06.078 "nvme_io_md": false, 00:10:06.078 "write_zeroes": true, 00:10:06.078 "zcopy": true, 00:10:06.078 "get_zone_info": false, 00:10:06.078 "zone_management": false, 00:10:06.078 "zone_append": false, 00:10:06.078 "compare": false, 00:10:06.078 "compare_and_write": false, 00:10:06.078 "abort": true, 00:10:06.078 "seek_hole": false, 00:10:06.078 "seek_data": false, 00:10:06.078 "copy": true, 00:10:06.078 "nvme_iov_md": false 00:10:06.078 }, 00:10:06.078 "memory_domains": [ 00:10:06.078 { 00:10:06.078 "dma_device_id": "system", 00:10:06.078 "dma_device_type": 1 00:10:06.078 }, 00:10:06.078 { 00:10:06.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.078 "dma_device_type": 2 00:10:06.078 } 00:10:06.078 ], 00:10:06.078 "driver_specific": {} 00:10:06.078 } 00:10:06.078 ]' 00:10:06.078 13:29:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:06.337 13:29:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:06.337 13:29:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:06.337 13:29:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.337 13:29:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:06.337 13:29:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.337 13:29:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:06.337 13:29:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.337 13:29:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:06.337 13:29:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.337 13:29:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:06.337 13:29:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:06.337 ************************************ 00:10:06.337 END TEST rpc_plugins 00:10:06.337 ************************************ 00:10:06.337 13:29:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:06.337 00:10:06.337 real 0m0.172s 00:10:06.337 user 0m0.097s 00:10:06.337 sys 0m0.033s 00:10:06.337 13:29:18 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.337 13:29:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:06.337 13:29:18 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:06.337 13:29:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.337 13:29:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.337 13:29:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.337 ************************************ 00:10:06.337 START TEST rpc_trace_cmd_test 00:10:06.337 ************************************ 00:10:06.337 13:29:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:10:06.337 13:29:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:06.337 13:29:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:06.337 13:29:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.337 13:29:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.337 13:29:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.337 13:29:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:06.337 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57955", 00:10:06.337 "tpoint_group_mask": "0x8", 00:10:06.337 "iscsi_conn": { 00:10:06.337 "mask": "0x2", 00:10:06.337 "tpoint_mask": "0x0" 00:10:06.337 }, 00:10:06.337 "scsi": { 00:10:06.337 "mask": "0x4", 00:10:06.337 "tpoint_mask": "0x0" 00:10:06.337 }, 00:10:06.337 "bdev": { 00:10:06.337 "mask": "0x8", 00:10:06.337 "tpoint_mask": "0xffffffffffffffff" 00:10:06.337 }, 00:10:06.337 "nvmf_rdma": { 00:10:06.337 "mask": "0x10", 00:10:06.337 "tpoint_mask": "0x0" 00:10:06.337 }, 00:10:06.337 "nvmf_tcp": { 00:10:06.337 "mask": "0x20", 00:10:06.337 "tpoint_mask": "0x0" 00:10:06.337 }, 00:10:06.337 "ftl": { 00:10:06.337 "mask": "0x40", 00:10:06.337 "tpoint_mask": "0x0" 00:10:06.337 }, 00:10:06.337 "blobfs": { 00:10:06.337 "mask": "0x80", 00:10:06.337 "tpoint_mask": "0x0" 00:10:06.337 }, 00:10:06.337 "dsa": { 00:10:06.337 "mask": "0x200", 00:10:06.337 "tpoint_mask": "0x0" 00:10:06.337 }, 00:10:06.338 "thread": { 00:10:06.338 "mask": "0x400", 00:10:06.338 "tpoint_mask": "0x0" 00:10:06.338 }, 00:10:06.338 "nvme_pcie": { 00:10:06.338 "mask": "0x800", 00:10:06.338 "tpoint_mask": "0x0" 00:10:06.338 }, 00:10:06.338 "iaa": { 00:10:06.338 "mask": "0x1000", 00:10:06.338 "tpoint_mask": "0x0" 00:10:06.338 }, 00:10:06.338 "nvme_tcp": { 00:10:06.338 "mask": "0x2000", 00:10:06.338 "tpoint_mask": "0x0" 00:10:06.338 }, 00:10:06.338 "bdev_nvme": { 00:10:06.338 "mask": "0x4000", 00:10:06.338 "tpoint_mask": "0x0" 00:10:06.338 }, 00:10:06.338 "sock": { 00:10:06.338 "mask": "0x8000", 00:10:06.338 "tpoint_mask": "0x0" 00:10:06.338 }, 00:10:06.338 "blob": { 00:10:06.338 "mask": "0x10000", 00:10:06.338 "tpoint_mask": "0x0" 00:10:06.338 }, 00:10:06.338 "bdev_raid": { 00:10:06.338 "mask": "0x20000", 00:10:06.338 "tpoint_mask": "0x0" 00:10:06.338 }, 00:10:06.338 "scheduler": { 00:10:06.338 "mask": "0x40000", 00:10:06.338 "tpoint_mask": "0x0" 00:10:06.338 } 00:10:06.338 }' 00:10:06.338 13:29:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:06.338 13:29:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:10:06.338 13:29:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:06.597 13:29:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:06.597 13:29:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:06.597 13:29:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:06.597 13:29:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:06.597 13:29:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:06.597 13:29:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:06.597 ************************************ 00:10:06.597 END TEST rpc_trace_cmd_test 00:10:06.597 ************************************ 00:10:06.597 13:29:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:06.597 00:10:06.597 real 0m0.278s 
00:10:06.597 user 0m0.213s 00:10:06.597 sys 0m0.050s 00:10:06.597 13:29:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.597 13:29:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.597 13:29:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:06.597 13:29:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:06.597 13:29:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:06.597 13:29:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.597 13:29:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.597 13:29:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.597 ************************************ 00:10:06.597 START TEST rpc_daemon_integrity 00:10:06.597 ************************************ 00:10:06.597 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:10:06.597 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:06.597 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.597 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:06.857 { 00:10:06.857 "name": "Malloc2", 00:10:06.857 "aliases": [ 00:10:06.857 "93a51e62-509c-487e-8779-a94a69bfe522" 00:10:06.857 ], 00:10:06.857 "product_name": "Malloc disk", 00:10:06.857 "block_size": 512, 00:10:06.857 "num_blocks": 16384, 00:10:06.857 "uuid": "93a51e62-509c-487e-8779-a94a69bfe522", 00:10:06.857 "assigned_rate_limits": { 00:10:06.857 "rw_ios_per_sec": 0, 00:10:06.857 "rw_mbytes_per_sec": 0, 00:10:06.857 "r_mbytes_per_sec": 0, 00:10:06.857 "w_mbytes_per_sec": 0 00:10:06.857 }, 00:10:06.857 "claimed": false, 00:10:06.857 "zoned": false, 00:10:06.857 "supported_io_types": { 00:10:06.857 "read": true, 00:10:06.857 "write": true, 00:10:06.857 "unmap": true, 00:10:06.857 "flush": true, 00:10:06.857 "reset": true, 00:10:06.857 "nvme_admin": false, 00:10:06.857 "nvme_io": false, 00:10:06.857 "nvme_io_md": false, 00:10:06.857 "write_zeroes": true, 00:10:06.857 "zcopy": true, 00:10:06.857 "get_zone_info": false, 00:10:06.857 "zone_management": false, 00:10:06.857 "zone_append": false, 00:10:06.857 "compare": false, 00:10:06.857 
"compare_and_write": false, 00:10:06.857 "abort": true, 00:10:06.857 "seek_hole": false, 00:10:06.857 "seek_data": false, 00:10:06.857 "copy": true, 00:10:06.857 "nvme_iov_md": false 00:10:06.857 }, 00:10:06.857 "memory_domains": [ 00:10:06.857 { 00:10:06.857 "dma_device_id": "system", 00:10:06.857 "dma_device_type": 1 00:10:06.857 }, 00:10:06.857 { 00:10:06.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.857 "dma_device_type": 2 00:10:06.857 } 00:10:06.857 ], 00:10:06.857 "driver_specific": {} 00:10:06.857 } 00:10:06.857 ]' 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.857 [2024-11-20 13:29:18.719130] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:06.857 [2024-11-20 13:29:18.719208] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.857 [2024-11-20 13:29:18.719235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:06.857 [2024-11-20 13:29:18.719250] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.857 [2024-11-20 13:29:18.721899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.857 [2024-11-20 13:29:18.721947] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:06.857 Passthru0 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.857 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:06.857 { 00:10:06.857 "name": "Malloc2", 00:10:06.857 "aliases": [ 00:10:06.857 "93a51e62-509c-487e-8779-a94a69bfe522" 00:10:06.857 ], 00:10:06.857 "product_name": "Malloc disk", 00:10:06.857 "block_size": 512, 00:10:06.857 "num_blocks": 16384, 00:10:06.857 "uuid": "93a51e62-509c-487e-8779-a94a69bfe522", 00:10:06.857 "assigned_rate_limits": { 00:10:06.857 "rw_ios_per_sec": 0, 00:10:06.857 "rw_mbytes_per_sec": 0, 00:10:06.857 "r_mbytes_per_sec": 0, 00:10:06.857 "w_mbytes_per_sec": 0 00:10:06.857 }, 00:10:06.857 "claimed": true, 00:10:06.857 "claim_type": "exclusive_write", 00:10:06.857 "zoned": false, 00:10:06.857 "supported_io_types": { 00:10:06.857 "read": true, 00:10:06.857 "write": true, 00:10:06.857 "unmap": true, 00:10:06.857 "flush": true, 00:10:06.857 "reset": true, 00:10:06.857 "nvme_admin": false, 00:10:06.857 "nvme_io": false, 00:10:06.857 "nvme_io_md": false, 00:10:06.857 "write_zeroes": true, 00:10:06.857 "zcopy": true, 00:10:06.857 "get_zone_info": false, 00:10:06.857 "zone_management": false, 00:10:06.857 "zone_append": false, 00:10:06.857 "compare": false, 00:10:06.857 "compare_and_write": false, 00:10:06.857 "abort": true, 00:10:06.857 "seek_hole": false, 00:10:06.857 "seek_data": false, 
00:10:06.857 "copy": true, 00:10:06.857 "nvme_iov_md": false 00:10:06.857 }, 00:10:06.857 "memory_domains": [ 00:10:06.857 { 00:10:06.857 "dma_device_id": "system", 00:10:06.858 "dma_device_type": 1 00:10:06.858 }, 00:10:06.858 { 00:10:06.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.858 "dma_device_type": 2 00:10:06.858 } 00:10:06.858 ], 00:10:06.858 "driver_specific": {} 00:10:06.858 }, 00:10:06.858 { 00:10:06.858 "name": "Passthru0", 00:10:06.858 "aliases": [ 00:10:06.858 "acefdc1e-e08e-565b-a9c2-86517249ebc7" 00:10:06.858 ], 00:10:06.858 "product_name": "passthru", 00:10:06.858 "block_size": 512, 00:10:06.858 "num_blocks": 16384, 00:10:06.858 "uuid": "acefdc1e-e08e-565b-a9c2-86517249ebc7", 00:10:06.858 "assigned_rate_limits": { 00:10:06.858 "rw_ios_per_sec": 0, 00:10:06.858 "rw_mbytes_per_sec": 0, 00:10:06.858 "r_mbytes_per_sec": 0, 00:10:06.858 "w_mbytes_per_sec": 0 00:10:06.858 }, 00:10:06.858 "claimed": false, 00:10:06.858 "zoned": false, 00:10:06.858 "supported_io_types": { 00:10:06.858 "read": true, 00:10:06.858 "write": true, 00:10:06.858 "unmap": true, 00:10:06.858 "flush": true, 00:10:06.858 "reset": true, 00:10:06.858 "nvme_admin": false, 00:10:06.858 "nvme_io": false, 00:10:06.858 "nvme_io_md": false, 00:10:06.858 "write_zeroes": true, 00:10:06.858 "zcopy": true, 00:10:06.858 "get_zone_info": false, 00:10:06.858 "zone_management": false, 00:10:06.858 "zone_append": false, 00:10:06.858 "compare": false, 00:10:06.858 "compare_and_write": false, 00:10:06.858 "abort": true, 00:10:06.858 "seek_hole": false, 00:10:06.858 "seek_data": false, 00:10:06.858 "copy": true, 00:10:06.858 "nvme_iov_md": false 00:10:06.858 }, 00:10:06.858 "memory_domains": [ 00:10:06.858 { 00:10:06.858 "dma_device_id": "system", 00:10:06.858 "dma_device_type": 1 00:10:06.858 }, 00:10:06.858 { 00:10:06.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.858 "dma_device_type": 2 00:10:06.858 } 00:10:06.858 ], 00:10:06.858 "driver_specific": { 00:10:06.858 "passthru": { 00:10:06.858 "name": "Passthru0", 00:10:06.858 "base_bdev_name": "Malloc2" 00:10:06.858 } 00:10:06.858 } 00:10:06.858 } 00:10:06.858 ]' 00:10:06.858 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:06.858 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:06.858 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:06.858 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.858 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:07.118 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.118 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:07.118 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.118 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:07.118 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.118 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:07.118 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.118 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:07.118 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.118 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
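rpc_daemon_integrity re-runs the rpc_integrity sequence over the RPC socket: start from an empty bdev list, create a malloc bdev, layer a passthru bdev on top, tear both down, and finish with a zero-length list (the jq length check just below). The same flow driven directly, using the commands rpc_cmd wrapped in this run; Malloc2 is the name this particular run produced, and the interleaved length checks are an illustrative arrangement:

  rpc.py bdev_malloc_create 8 512                 # 8MB at 512-byte blocks -> 16384 blocks, named Malloc2 here
  rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
  rpc.py bdev_get_bdevs | jq length               # 2 while both bdevs exist
  rpc.py bdev_passthru_delete Passthru0
  rpc.py bdev_malloc_delete Malloc2
  rpc.py bdev_get_bdevs | jq length               # back to 0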
00:10:07.118 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:07.118 ************************************ 00:10:07.118 END TEST rpc_daemon_integrity 00:10:07.118 ************************************ 00:10:07.118 13:29:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:07.118 00:10:07.118 real 0m0.369s 00:10:07.118 user 0m0.195s 00:10:07.118 sys 0m0.070s 00:10:07.118 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.118 13:29:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:07.118 13:29:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:07.118 13:29:18 rpc -- rpc/rpc.sh@84 -- # killprocess 57955 00:10:07.118 13:29:18 rpc -- common/autotest_common.sh@954 -- # '[' -z 57955 ']' 00:10:07.118 13:29:18 rpc -- common/autotest_common.sh@958 -- # kill -0 57955 00:10:07.118 13:29:18 rpc -- common/autotest_common.sh@959 -- # uname 00:10:07.118 13:29:18 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.118 13:29:18 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57955 00:10:07.118 13:29:19 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.118 13:29:19 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.118 killing process with pid 57955 00:10:07.118 13:29:19 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57955' 00:10:07.118 13:29:19 rpc -- common/autotest_common.sh@973 -- # kill 57955 00:10:07.118 13:29:19 rpc -- common/autotest_common.sh@978 -- # wait 57955 00:10:09.713 00:10:09.713 real 0m5.737s 00:10:09.713 user 0m6.185s 00:10:09.713 sys 0m1.156s 00:10:09.713 13:29:21 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.713 ************************************ 00:10:09.713 END TEST rpc 00:10:09.713 ************************************ 00:10:09.713 13:29:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.713 13:29:21 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:09.713 13:29:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:09.713 13:29:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.713 13:29:21 -- common/autotest_common.sh@10 -- # set +x 00:10:09.713 ************************************ 00:10:09.713 START TEST skip_rpc 00:10:09.713 ************************************ 00:10:09.713 13:29:21 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:09.972 * Looking for test storage... 
00:10:09.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:09.972 13:29:21 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:09.972 13:29:21 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:09.972 13:29:21 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:09.972 13:29:21 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@345 -- # : 1 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:09.972 13:29:21 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:09.973 13:29:21 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:10:09.973 13:29:21 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.973 13:29:21 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:10:09.973 13:29:21 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.973 13:29:21 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:09.973 13:29:21 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:10:09.973 13:29:21 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.973 13:29:21 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:10:09.973 13:29:21 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.973 13:29:21 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.973 13:29:21 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.973 13:29:21 skip_rpc -- scripts/common.sh@368 -- # return 0 00:10:09.973 13:29:21 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.973 13:29:21 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:09.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.973 --rc genhtml_branch_coverage=1 00:10:09.973 --rc genhtml_function_coverage=1 00:10:09.973 --rc genhtml_legend=1 00:10:09.973 --rc geninfo_all_blocks=1 00:10:09.973 --rc geninfo_unexecuted_blocks=1 00:10:09.973 00:10:09.973 ' 00:10:09.973 13:29:21 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:09.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.973 --rc genhtml_branch_coverage=1 00:10:09.973 --rc genhtml_function_coverage=1 00:10:09.973 --rc genhtml_legend=1 00:10:09.973 --rc geninfo_all_blocks=1 00:10:09.973 --rc geninfo_unexecuted_blocks=1 00:10:09.973 00:10:09.973 ' 00:10:09.973 13:29:21 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:09.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.973 --rc genhtml_branch_coverage=1 00:10:09.973 --rc genhtml_function_coverage=1 00:10:09.973 --rc genhtml_legend=1 00:10:09.973 --rc geninfo_all_blocks=1 00:10:09.973 --rc geninfo_unexecuted_blocks=1 00:10:09.973 00:10:09.973 ' 00:10:09.973 13:29:21 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:09.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.973 --rc genhtml_branch_coverage=1 00:10:09.973 --rc genhtml_function_coverage=1 00:10:09.973 --rc genhtml_legend=1 00:10:09.973 --rc geninfo_all_blocks=1 00:10:09.973 --rc geninfo_unexecuted_blocks=1 00:10:09.973 00:10:09.973 ' 00:10:09.973 13:29:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:09.973 13:29:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:09.973 13:29:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:09.973 13:29:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:09.973 13:29:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.973 13:29:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.973 ************************************ 00:10:09.973 START TEST skip_rpc 00:10:09.973 ************************************ 00:10:09.973 13:29:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:10:09.973 13:29:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58195 00:10:09.973 13:29:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:09.973 13:29:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:09.973 13:29:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:09.973 [2024-11-20 13:29:21.928882] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
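The coverage gate above runs lt 1.15 2 by splitting each version string on '.', '-' and ':' and comparing components left to right. A condensed sketch of that comparison, assuming purely numeric components (the scripts/common.sh original does the same padding via the ver1_l/ver2_l bookkeeping visible above):

ver_lt() {    # succeeds when $1 sorts strictly before $2
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}

ver_lt 1.15 2 && echo "lcov predates 2.x: use the legacy --rc flags"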
00:10:10.232 [2024-11-20 13:29:21.929226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58195 ] 00:10:10.232 [2024-11-20 13:29:22.116419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.491 [2024-11-20 13:29:22.235616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58195 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58195 ']' 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58195 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58195 00:10:15.761 killing process with pid 58195 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.761 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.762 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58195' 00:10:15.762 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58195 00:10:15.762 13:29:26 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58195 00:10:17.672 00:10:17.672 real 0m7.512s 00:10:17.672 user 0m6.985s 00:10:17.672 sys 0m0.449s 00:10:17.672 13:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.672 13:29:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.672 ************************************ 00:10:17.672 END TEST skip_rpc 00:10:17.672 
************************************ 00:10:17.672 13:29:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:17.672 13:29:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:17.672 13:29:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.672 13:29:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.672 ************************************ 00:10:17.672 START TEST skip_rpc_with_json 00:10:17.672 ************************************ 00:10:17.672 13:29:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:10:17.672 13:29:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:17.672 13:29:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58299 00:10:17.672 13:29:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:17.672 13:29:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58299 00:10:17.672 13:29:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:17.672 13:29:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58299 ']' 00:10:17.672 13:29:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.672 13:29:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.672 13:29:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.672 13:29:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.672 13:29:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:17.672 [2024-11-20 13:29:29.514077] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
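waitforlisten above blocks until the freshly launched spdk_tgt owns its RPC socket, printing the same "Waiting for process..." line seen in this log. Roughly what that amounts to, simplified (the real helper also retries an actual RPC call; the retry count and sleep interval here are illustrative):

waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
        [[ -S $sock ]] && return 0                # socket exists: listening
        sleep 0.1
    done
    return 1                                      # timed out
}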
00:10:17.672 [2024-11-20 13:29:29.514212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58299 ] 00:10:17.931 [2024-11-20 13:29:29.698685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.931 [2024-11-20 13:29:29.810462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.864 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.864 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:10:18.864 13:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:18.864 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.864 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:18.864 [2024-11-20 13:29:30.648724] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:18.864 request: 00:10:18.864 { 00:10:18.864 "trtype": "tcp", 00:10:18.864 "method": "nvmf_get_transports", 00:10:18.864 "req_id": 1 00:10:18.864 } 00:10:18.864 Got JSON-RPC error response 00:10:18.864 response: 00:10:18.864 { 00:10:18.864 "code": -19, 00:10:18.864 "message": "No such device" 00:10:18.864 } 00:10:18.864 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:18.864 13:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:18.864 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.864 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:18.864 [2024-11-20 13:29:30.664835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.864 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.864 13:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:18.864 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.864 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:19.123 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.123 13:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:19.123 { 00:10:19.123 "subsystems": [ 00:10:19.123 { 00:10:19.123 "subsystem": "fsdev", 00:10:19.123 "config": [ 00:10:19.123 { 00:10:19.123 "method": "fsdev_set_opts", 00:10:19.123 "params": { 00:10:19.123 "fsdev_io_pool_size": 65535, 00:10:19.123 "fsdev_io_cache_size": 256 00:10:19.123 } 00:10:19.123 } 00:10:19.123 ] 00:10:19.123 }, 00:10:19.123 { 00:10:19.123 "subsystem": "keyring", 00:10:19.123 "config": [] 00:10:19.123 }, 00:10:19.123 { 00:10:19.123 "subsystem": "iobuf", 00:10:19.123 "config": [ 00:10:19.123 { 00:10:19.123 "method": "iobuf_set_options", 00:10:19.123 "params": { 00:10:19.123 "small_pool_count": 8192, 00:10:19.123 "large_pool_count": 1024, 00:10:19.123 "small_bufsize": 8192, 00:10:19.123 "large_bufsize": 135168, 00:10:19.123 "enable_numa": false 00:10:19.123 } 00:10:19.123 } 00:10:19.123 ] 00:10:19.123 }, 00:10:19.123 { 00:10:19.123 "subsystem": "sock", 00:10:19.123 "config": [ 00:10:19.123 { 
00:10:19.123 "method": "sock_set_default_impl", 00:10:19.123 "params": { 00:10:19.123 "impl_name": "posix" 00:10:19.123 } 00:10:19.123 }, 00:10:19.123 { 00:10:19.123 "method": "sock_impl_set_options", 00:10:19.123 "params": { 00:10:19.123 "impl_name": "ssl", 00:10:19.123 "recv_buf_size": 4096, 00:10:19.123 "send_buf_size": 4096, 00:10:19.123 "enable_recv_pipe": true, 00:10:19.123 "enable_quickack": false, 00:10:19.123 "enable_placement_id": 0, 00:10:19.123 "enable_zerocopy_send_server": true, 00:10:19.123 "enable_zerocopy_send_client": false, 00:10:19.123 "zerocopy_threshold": 0, 00:10:19.123 "tls_version": 0, 00:10:19.123 "enable_ktls": false 00:10:19.123 } 00:10:19.123 }, 00:10:19.123 { 00:10:19.123 "method": "sock_impl_set_options", 00:10:19.123 "params": { 00:10:19.123 "impl_name": "posix", 00:10:19.123 "recv_buf_size": 2097152, 00:10:19.123 "send_buf_size": 2097152, 00:10:19.123 "enable_recv_pipe": true, 00:10:19.123 "enable_quickack": false, 00:10:19.123 "enable_placement_id": 0, 00:10:19.123 "enable_zerocopy_send_server": true, 00:10:19.123 "enable_zerocopy_send_client": false, 00:10:19.123 "zerocopy_threshold": 0, 00:10:19.123 "tls_version": 0, 00:10:19.123 "enable_ktls": false 00:10:19.123 } 00:10:19.123 } 00:10:19.123 ] 00:10:19.123 }, 00:10:19.123 { 00:10:19.123 "subsystem": "vmd", 00:10:19.123 "config": [] 00:10:19.123 }, 00:10:19.123 { 00:10:19.123 "subsystem": "accel", 00:10:19.123 "config": [ 00:10:19.123 { 00:10:19.123 "method": "accel_set_options", 00:10:19.123 "params": { 00:10:19.123 "small_cache_size": 128, 00:10:19.123 "large_cache_size": 16, 00:10:19.123 "task_count": 2048, 00:10:19.123 "sequence_count": 2048, 00:10:19.123 "buf_count": 2048 00:10:19.123 } 00:10:19.123 } 00:10:19.123 ] 00:10:19.123 }, 00:10:19.123 { 00:10:19.123 "subsystem": "bdev", 00:10:19.123 "config": [ 00:10:19.123 { 00:10:19.123 "method": "bdev_set_options", 00:10:19.123 "params": { 00:10:19.123 "bdev_io_pool_size": 65535, 00:10:19.123 "bdev_io_cache_size": 256, 00:10:19.123 "bdev_auto_examine": true, 00:10:19.123 "iobuf_small_cache_size": 128, 00:10:19.123 "iobuf_large_cache_size": 16 00:10:19.123 } 00:10:19.123 }, 00:10:19.123 { 00:10:19.123 "method": "bdev_raid_set_options", 00:10:19.123 "params": { 00:10:19.123 "process_window_size_kb": 1024, 00:10:19.124 "process_max_bandwidth_mb_sec": 0 00:10:19.124 } 00:10:19.124 }, 00:10:19.124 { 00:10:19.124 "method": "bdev_iscsi_set_options", 00:10:19.124 "params": { 00:10:19.124 "timeout_sec": 30 00:10:19.124 } 00:10:19.124 }, 00:10:19.124 { 00:10:19.124 "method": "bdev_nvme_set_options", 00:10:19.124 "params": { 00:10:19.124 "action_on_timeout": "none", 00:10:19.124 "timeout_us": 0, 00:10:19.124 "timeout_admin_us": 0, 00:10:19.124 "keep_alive_timeout_ms": 10000, 00:10:19.124 "arbitration_burst": 0, 00:10:19.124 "low_priority_weight": 0, 00:10:19.124 "medium_priority_weight": 0, 00:10:19.124 "high_priority_weight": 0, 00:10:19.124 "nvme_adminq_poll_period_us": 10000, 00:10:19.124 "nvme_ioq_poll_period_us": 0, 00:10:19.124 "io_queue_requests": 0, 00:10:19.124 "delay_cmd_submit": true, 00:10:19.124 "transport_retry_count": 4, 00:10:19.124 "bdev_retry_count": 3, 00:10:19.124 "transport_ack_timeout": 0, 00:10:19.124 "ctrlr_loss_timeout_sec": 0, 00:10:19.124 "reconnect_delay_sec": 0, 00:10:19.124 "fast_io_fail_timeout_sec": 0, 00:10:19.124 "disable_auto_failback": false, 00:10:19.124 "generate_uuids": false, 00:10:19.124 "transport_tos": 0, 00:10:19.124 "nvme_error_stat": false, 00:10:19.124 "rdma_srq_size": 0, 00:10:19.124 "io_path_stat": false, 
00:10:19.124 "allow_accel_sequence": false, 00:10:19.124 "rdma_max_cq_size": 0, 00:10:19.124 "rdma_cm_event_timeout_ms": 0, 00:10:19.124 "dhchap_digests": [ 00:10:19.124 "sha256", 00:10:19.124 "sha384", 00:10:19.124 "sha512" 00:10:19.124 ], 00:10:19.124 "dhchap_dhgroups": [ 00:10:19.124 "null", 00:10:19.124 "ffdhe2048", 00:10:19.124 "ffdhe3072", 00:10:19.124 "ffdhe4096", 00:10:19.124 "ffdhe6144", 00:10:19.124 "ffdhe8192" 00:10:19.124 ] 00:10:19.124 } 00:10:19.124 }, 00:10:19.124 { 00:10:19.124 "method": "bdev_nvme_set_hotplug", 00:10:19.124 "params": { 00:10:19.124 "period_us": 100000, 00:10:19.124 "enable": false 00:10:19.124 } 00:10:19.124 }, 00:10:19.124 { 00:10:19.124 "method": "bdev_wait_for_examine" 00:10:19.124 } 00:10:19.124 ] 00:10:19.124 }, 00:10:19.124 { 00:10:19.124 "subsystem": "scsi", 00:10:19.124 "config": null 00:10:19.124 }, 00:10:19.124 { 00:10:19.124 "subsystem": "scheduler", 00:10:19.124 "config": [ 00:10:19.124 { 00:10:19.124 "method": "framework_set_scheduler", 00:10:19.124 "params": { 00:10:19.124 "name": "static" 00:10:19.124 } 00:10:19.124 } 00:10:19.124 ] 00:10:19.124 }, 00:10:19.124 { 00:10:19.124 "subsystem": "vhost_scsi", 00:10:19.124 "config": [] 00:10:19.124 }, 00:10:19.124 { 00:10:19.124 "subsystem": "vhost_blk", 00:10:19.124 "config": [] 00:10:19.124 }, 00:10:19.124 { 00:10:19.124 "subsystem": "ublk", 00:10:19.124 "config": [] 00:10:19.124 }, 00:10:19.124 { 00:10:19.124 "subsystem": "nbd", 00:10:19.124 "config": [] 00:10:19.124 }, 00:10:19.124 { 00:10:19.124 "subsystem": "nvmf", 00:10:19.124 "config": [ 00:10:19.124 { 00:10:19.124 "method": "nvmf_set_config", 00:10:19.124 "params": { 00:10:19.124 "discovery_filter": "match_any", 00:10:19.124 "admin_cmd_passthru": { 00:10:19.124 "identify_ctrlr": false 00:10:19.124 }, 00:10:19.124 "dhchap_digests": [ 00:10:19.124 "sha256", 00:10:19.124 "sha384", 00:10:19.124 "sha512" 00:10:19.124 ], 00:10:19.124 "dhchap_dhgroups": [ 00:10:19.124 "null", 00:10:19.124 "ffdhe2048", 00:10:19.124 "ffdhe3072", 00:10:19.124 "ffdhe4096", 00:10:19.124 "ffdhe6144", 00:10:19.124 "ffdhe8192" 00:10:19.124 ] 00:10:19.124 } 00:10:19.124 }, 00:10:19.124 { 00:10:19.124 "method": "nvmf_set_max_subsystems", 00:10:19.124 "params": { 00:10:19.124 "max_subsystems": 1024 00:10:19.124 } 00:10:19.124 }, 00:10:19.124 { 00:10:19.124 "method": "nvmf_set_crdt", 00:10:19.124 "params": { 00:10:19.124 "crdt1": 0, 00:10:19.124 "crdt2": 0, 00:10:19.124 "crdt3": 0 00:10:19.124 } 00:10:19.124 }, 00:10:19.124 { 00:10:19.124 "method": "nvmf_create_transport", 00:10:19.124 "params": { 00:10:19.124 "trtype": "TCP", 00:10:19.124 "max_queue_depth": 128, 00:10:19.124 "max_io_qpairs_per_ctrlr": 127, 00:10:19.124 "in_capsule_data_size": 4096, 00:10:19.124 "max_io_size": 131072, 00:10:19.124 "io_unit_size": 131072, 00:10:19.124 "max_aq_depth": 128, 00:10:19.124 "num_shared_buffers": 511, 00:10:19.124 "buf_cache_size": 4294967295, 00:10:19.124 "dif_insert_or_strip": false, 00:10:19.124 "zcopy": false, 00:10:19.124 "c2h_success": true, 00:10:19.124 "sock_priority": 0, 00:10:19.124 "abort_timeout_sec": 1, 00:10:19.124 "ack_timeout": 0, 00:10:19.124 "data_wr_pool_size": 0 00:10:19.124 } 00:10:19.124 } 00:10:19.124 ] 00:10:19.124 }, 00:10:19.124 { 00:10:19.124 "subsystem": "iscsi", 00:10:19.124 "config": [ 00:10:19.124 { 00:10:19.124 "method": "iscsi_set_options", 00:10:19.124 "params": { 00:10:19.124 "node_base": "iqn.2016-06.io.spdk", 00:10:19.124 "max_sessions": 128, 00:10:19.124 "max_connections_per_session": 2, 00:10:19.124 "max_queue_depth": 64, 00:10:19.124 
"default_time2wait": 2, 00:10:19.124 "default_time2retain": 20, 00:10:19.124 "first_burst_length": 8192, 00:10:19.124 "immediate_data": true, 00:10:19.124 "allow_duplicated_isid": false, 00:10:19.124 "error_recovery_level": 0, 00:10:19.124 "nop_timeout": 60, 00:10:19.124 "nop_in_interval": 30, 00:10:19.124 "disable_chap": false, 00:10:19.124 "require_chap": false, 00:10:19.124 "mutual_chap": false, 00:10:19.124 "chap_group": 0, 00:10:19.124 "max_large_datain_per_connection": 64, 00:10:19.124 "max_r2t_per_connection": 4, 00:10:19.124 "pdu_pool_size": 36864, 00:10:19.124 "immediate_data_pool_size": 16384, 00:10:19.124 "data_out_pool_size": 2048 00:10:19.124 } 00:10:19.124 } 00:10:19.124 ] 00:10:19.124 } 00:10:19.124 ] 00:10:19.124 } 00:10:19.124 13:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:19.124 13:29:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58299 00:10:19.124 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58299 ']' 00:10:19.124 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58299 00:10:19.124 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:19.124 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.124 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58299 00:10:19.124 killing process with pid 58299 00:10:19.124 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.124 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.124 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58299' 00:10:19.124 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58299 00:10:19.124 13:29:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58299 00:10:21.670 13:29:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58355 00:10:21.670 13:29:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:21.670 13:29:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:26.942 13:29:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58355 00:10:26.942 13:29:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58355 ']' 00:10:26.942 13:29:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58355 00:10:26.942 13:29:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:26.942 13:29:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.942 13:29:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58355 00:10:26.942 killing process with pid 58355 00:10:26.942 13:29:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.942 13:29:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.942 13:29:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58355' 00:10:26.942 13:29:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58355 00:10:26.942 13:29:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58355 00:10:29.474 13:29:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:29.474 13:29:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:29.474 ************************************ 00:10:29.474 END TEST skip_rpc_with_json 00:10:29.474 ************************************ 00:10:29.474 00:10:29.474 real 0m11.437s 00:10:29.474 user 0m10.854s 00:10:29.474 sys 0m0.933s 00:10:29.474 13:29:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.474 13:29:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:29.474 13:29:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:29.474 13:29:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:29.474 13:29:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.474 13:29:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.474 ************************************ 00:10:29.474 START TEST skip_rpc_with_delay 00:10:29.474 ************************************ 00:10:29.474 13:29:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:10:29.474 13:29:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:29.474 13:29:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:10:29.474 13:29:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:29.475 13:29:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:29.475 13:29:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:29.475 13:29:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:29.475 13:29:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:29.475 13:29:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:29.475 13:29:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:29.475 13:29:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:29.475 13:29:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:29.475 13:29:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:29.475 [2024-11-20 13:29:41.040101] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
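That is the whole skip_rpc_with_json round trip, condensed: dump the live configuration with save_config (the subsystem JSON printed above), restart the target from that file with --json, then grep its log for the transport-init notice. Flags and paths mirror the run above; a built spdk_tgt and a live first target are assumed:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
log=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt

$rpc save_config > "$cfg"                         # snapshot the running target
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
    --json "$cfg" &> "$log" &
tgt=$!
sleep 5                                           # no RPC server to poll, so sleep
grep -q 'TCP Transport Init' "$log" && echo "transport restored from JSON"
kill "$tgt"; wait "$tgt" 2>/dev/null || true
rm "$log"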
00:10:29.475 ************************************ 00:10:29.475 END TEST skip_rpc_with_delay 00:10:29.475 ************************************ 00:10:29.475 13:29:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:10:29.475 13:29:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:29.475 13:29:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:29.475 13:29:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:29.475 00:10:29.475 real 0m0.208s 00:10:29.475 user 0m0.096s 00:10:29.475 sys 0m0.109s 00:10:29.475 13:29:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.475 13:29:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:29.475 13:29:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:29.475 13:29:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:29.475 13:29:41 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:29.475 13:29:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:29.475 13:29:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.475 13:29:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.475 ************************************ 00:10:29.475 START TEST exit_on_failed_rpc_init 00:10:29.475 ************************************ 00:10:29.475 13:29:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:10:29.475 13:29:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58483 00:10:29.475 13:29:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:29.475 13:29:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58483 00:10:29.475 13:29:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58483 ']' 00:10:29.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.475 13:29:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.475 13:29:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.475 13:29:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.475 13:29:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.475 13:29:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:29.475 [2024-11-20 13:29:41.341250] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
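skip_rpc_with_delay above asserts that spdk_tgt refuses --wait-for-rpc when no RPC server will be started; the NOT wrapper inverts the exit status so an expected failure counts as a pass. A simplified sketch (the real helper also normalizes exit codes, the es= bookkeeping visible in this log):

NOT() {    # succeed only if the wrapped command fails
    if "$@"; then
        echo "expected failure, but '$*' succeeded" >&2
        return 1
    fi
}

NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc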
00:10:29.475 [2024-11-20 13:29:41.341437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58483 ] 00:10:29.800 [2024-11-20 13:29:41.538761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.800 [2024-11-20 13:29:41.665032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.760 13:29:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.760 13:29:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:10:30.760 13:29:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:30.760 13:29:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:30.760 13:29:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:10:30.760 13:29:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:30.760 13:29:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:30.760 13:29:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:30.760 13:29:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:30.760 13:29:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:30.760 13:29:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:30.760 13:29:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:30.760 13:29:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:30.760 13:29:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:30.760 13:29:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:30.760 [2024-11-20 13:29:42.623536] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:30.760 [2024-11-20 13:29:42.623911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58512 ] 00:10:31.018 [2024-11-20 13:29:42.806665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.018 [2024-11-20 13:29:42.926220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.018 [2024-11-20 13:29:42.926357] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
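The second target (-m 0x2) fails below because both instances default to the same /var/tmp/spdk.sock, which is exactly the failure exit_on_failed_rpc_init wants to provoke. Outside a failure test, the conventional fix is one RPC socket per instance; a sketch, assuming the standard -r/--rpc-socket option and these paths:

bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$bin" -m 0x1 &                           # first instance on /var/tmp/spdk.sock
"$bin" -m 0x2 -r /var/tmp/spdk2.sock &    # second instance on its own socket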
00:10:31.018 [2024-11-20 13:29:42.926375] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:31.018 [2024-11-20 13:29:42.926397] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:31.277 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:10:31.277 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:31.277 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:10:31.277 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:10:31.277 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:10:31.277 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:31.277 13:29:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:31.277 13:29:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58483 00:10:31.277 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58483 ']' 00:10:31.277 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58483 00:10:31.277 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:10:31.277 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.277 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58483 00:10:31.536 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.536 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.536 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58483' 00:10:31.536 killing process with pid 58483 00:10:31.536 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58483 00:10:31.536 13:29:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58483 00:10:34.100 00:10:34.100 real 0m4.474s 00:10:34.100 user 0m4.732s 00:10:34.100 sys 0m0.681s 00:10:34.100 13:29:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.100 ************************************ 00:10:34.100 END TEST exit_on_failed_rpc_init 00:10:34.100 ************************************ 00:10:34.100 13:29:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:34.100 13:29:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:34.100 ************************************ 00:10:34.100 END TEST skip_rpc 00:10:34.100 ************************************ 00:10:34.100 00:10:34.100 real 0m24.182s 00:10:34.100 user 0m22.887s 00:10:34.100 sys 0m2.489s 00:10:34.100 13:29:45 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.100 13:29:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.100 13:29:45 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:34.100 13:29:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:34.100 13:29:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.100 13:29:45 -- common/autotest_common.sh@10 -- # set +x 00:10:34.100 
************************************ 00:10:34.100 START TEST rpc_client 00:10:34.100 ************************************ 00:10:34.100 13:29:45 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:34.100 * Looking for test storage... 00:10:34.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:34.100 13:29:45 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:34.100 13:29:45 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:10:34.100 13:29:45 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:34.100 13:29:46 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@345 -- # : 1 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.100 13:29:46 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:34.100 13:29:46 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.100 13:29:46 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:34.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.100 --rc genhtml_branch_coverage=1 00:10:34.100 --rc genhtml_function_coverage=1 00:10:34.100 --rc genhtml_legend=1 00:10:34.100 --rc geninfo_all_blocks=1 00:10:34.100 --rc geninfo_unexecuted_blocks=1 00:10:34.100 00:10:34.100 ' 00:10:34.100 13:29:46 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:34.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.100 --rc genhtml_branch_coverage=1 00:10:34.100 --rc genhtml_function_coverage=1 00:10:34.100 --rc genhtml_legend=1 00:10:34.100 --rc geninfo_all_blocks=1 00:10:34.100 --rc geninfo_unexecuted_blocks=1 00:10:34.100 00:10:34.100 ' 00:10:34.100 13:29:46 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:34.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.101 --rc genhtml_branch_coverage=1 00:10:34.101 --rc genhtml_function_coverage=1 00:10:34.101 --rc genhtml_legend=1 00:10:34.101 --rc geninfo_all_blocks=1 00:10:34.101 --rc geninfo_unexecuted_blocks=1 00:10:34.101 00:10:34.101 ' 00:10:34.101 13:29:46 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:34.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.101 --rc genhtml_branch_coverage=1 00:10:34.101 --rc genhtml_function_coverage=1 00:10:34.101 --rc genhtml_legend=1 00:10:34.101 --rc geninfo_all_blocks=1 00:10:34.101 --rc geninfo_unexecuted_blocks=1 00:10:34.101 00:10:34.101 ' 00:10:34.101 13:29:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:34.359 OK 00:10:34.359 13:29:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:34.359 ************************************ 00:10:34.359 END TEST rpc_client 00:10:34.359 ************************************ 00:10:34.359 00:10:34.359 real 0m0.319s 00:10:34.359 user 0m0.164s 00:10:34.359 sys 0m0.170s 00:10:34.359 13:29:46 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.359 13:29:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:34.359 13:29:46 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:34.359 13:29:46 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:34.359 13:29:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.359 13:29:46 -- common/autotest_common.sh@10 -- # set +x 00:10:34.359 ************************************ 00:10:34.359 START TEST json_config 00:10:34.359 ************************************ 00:10:34.359 13:29:46 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:34.359 13:29:46 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:34.359 13:29:46 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:10:34.359 13:29:46 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:34.619 13:29:46 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:34.619 13:29:46 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.619 13:29:46 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.619 13:29:46 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.619 13:29:46 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.619 13:29:46 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.619 13:29:46 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.619 13:29:46 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.619 13:29:46 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.619 13:29:46 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.619 13:29:46 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.619 13:29:46 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.619 13:29:46 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:34.619 13:29:46 json_config -- scripts/common.sh@345 -- # : 1 00:10:34.619 13:29:46 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.620 13:29:46 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.620 13:29:46 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:34.620 13:29:46 json_config -- scripts/common.sh@353 -- # local d=1 00:10:34.620 13:29:46 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.620 13:29:46 json_config -- scripts/common.sh@355 -- # echo 1 00:10:34.620 13:29:46 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.620 13:29:46 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:34.620 13:29:46 json_config -- scripts/common.sh@353 -- # local d=2 00:10:34.620 13:29:46 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.620 13:29:46 json_config -- scripts/common.sh@355 -- # echo 2 00:10:34.620 13:29:46 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.620 13:29:46 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.620 13:29:46 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.620 13:29:46 json_config -- scripts/common.sh@368 -- # return 0 00:10:34.620 13:29:46 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.620 13:29:46 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:34.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.620 --rc genhtml_branch_coverage=1 00:10:34.620 --rc genhtml_function_coverage=1 00:10:34.620 --rc genhtml_legend=1 00:10:34.620 --rc geninfo_all_blocks=1 00:10:34.620 --rc geninfo_unexecuted_blocks=1 00:10:34.620 00:10:34.620 ' 00:10:34.620 13:29:46 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:34.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.620 --rc genhtml_branch_coverage=1 00:10:34.620 --rc genhtml_function_coverage=1 00:10:34.620 --rc genhtml_legend=1 00:10:34.620 --rc geninfo_all_blocks=1 00:10:34.620 --rc geninfo_unexecuted_blocks=1 00:10:34.620 00:10:34.620 ' 00:10:34.620 13:29:46 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:34.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.620 --rc genhtml_branch_coverage=1 00:10:34.620 --rc genhtml_function_coverage=1 00:10:34.620 --rc genhtml_legend=1 00:10:34.620 --rc geninfo_all_blocks=1 00:10:34.620 --rc geninfo_unexecuted_blocks=1 00:10:34.620 00:10:34.620 ' 00:10:34.620 13:29:46 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:34.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.620 --rc genhtml_branch_coverage=1 00:10:34.620 --rc genhtml_function_coverage=1 00:10:34.620 --rc genhtml_legend=1 00:10:34.620 --rc geninfo_all_blocks=1 00:10:34.620 --rc geninfo_unexecuted_blocks=1 00:10:34.620 00:10:34.620 ' 00:10:34.620 13:29:46 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.620 13:29:46 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9631dc76-024e-47d8-ab58-2f4e4cd41f29 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=9631dc76-024e-47d8-ab58-2f4e4cd41f29 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:34.620 13:29:46 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.620 13:29:46 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.620 13:29:46 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.620 13:29:46 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.620 13:29:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.620 13:29:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.620 13:29:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.620 13:29:46 json_config -- paths/export.sh@5 -- # export PATH 00:10:34.620 13:29:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@51 -- # : 0 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.620 13:29:46 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.620 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.620 13:29:46 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.620 13:29:46 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:34.620 13:29:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:34.620 13:29:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:34.620 13:29:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:34.620 13:29:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:34.620 13:29:46 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:10:34.620 WARNING: No tests are enabled so not running JSON configuration tests 00:10:34.620 13:29:46 json_config -- json_config/json_config.sh@28 -- # exit 0 00:10:34.620 00:10:34.620 real 0m0.231s 00:10:34.620 user 0m0.127s 00:10:34.620 sys 0m0.104s 00:10:34.621 13:29:46 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.621 13:29:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:34.621 ************************************ 00:10:34.621 END TEST json_config 00:10:34.621 ************************************ 00:10:34.621 13:29:46 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:34.621 13:29:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:34.621 13:29:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.621 13:29:46 -- common/autotest_common.sh@10 -- # set +x 00:10:34.621 ************************************ 00:10:34.621 START TEST json_config_extra_key 00:10:34.621 ************************************ 00:10:34.621 13:29:46 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:34.621 13:29:46 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:34.880 13:29:46 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:10:34.880 13:29:46 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:34.880 13:29:46 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.880 13:29:46 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:34.880 13:29:46 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.880 13:29:46 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:34.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.880 --rc genhtml_branch_coverage=1 00:10:34.880 --rc genhtml_function_coverage=1 00:10:34.880 --rc genhtml_legend=1 00:10:34.880 --rc geninfo_all_blocks=1 00:10:34.880 --rc geninfo_unexecuted_blocks=1 00:10:34.880 00:10:34.880 ' 00:10:34.880 13:29:46 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:34.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.880 --rc genhtml_branch_coverage=1 00:10:34.880 --rc genhtml_function_coverage=1 00:10:34.880 --rc genhtml_legend=1 00:10:34.880 --rc geninfo_all_blocks=1 00:10:34.880 --rc geninfo_unexecuted_blocks=1 00:10:34.880 00:10:34.880 ' 00:10:34.880 13:29:46 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:34.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.880 --rc genhtml_branch_coverage=1 00:10:34.880 --rc genhtml_function_coverage=1 00:10:34.880 --rc genhtml_legend=1 00:10:34.880 --rc geninfo_all_blocks=1 00:10:34.880 --rc geninfo_unexecuted_blocks=1 00:10:34.880 00:10:34.880 ' 00:10:34.880 13:29:46 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:34.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.880 --rc genhtml_branch_coverage=1 00:10:34.880 --rc 
genhtml_function_coverage=1 00:10:34.880 --rc genhtml_legend=1 00:10:34.880 --rc geninfo_all_blocks=1 00:10:34.880 --rc geninfo_unexecuted_blocks=1 00:10:34.880 00:10:34.880 ' 00:10:34.880 13:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9631dc76-024e-47d8-ab58-2f4e4cd41f29 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9631dc76-024e-47d8-ab58-2f4e4cd41f29 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.880 13:29:46 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.880 13:29:46 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.880 13:29:46 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.880 13:29:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.880 13:29:46 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.880 13:29:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:34.881 13:29:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.881 13:29:46 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:34.881 13:29:46 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.881 13:29:46 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.881 13:29:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.881 13:29:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.881 13:29:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.881 13:29:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.881 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.881 13:29:46 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.881 13:29:46 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.881 13:29:46 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.881 13:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:34.881 13:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:34.881 13:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:34.881 13:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:34.881 13:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:34.881 13:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:34.881 13:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:34.881 13:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:34.881 13:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:34.881 13:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:34.881 INFO: launching applications... 00:10:34.881 13:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
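The json_config harness sourced above keeps all per-application state in bash associative arrays keyed by app name; the 'target' entries traced here carry the RPC socket, core parameters, and JSON config for one spdk_tgt instance. A minimal sketch of that pattern, using only values visible in this trace (the launch line condenses the spdk_tgt invocation recorded in the next stretch of the log):

    # Per-app bookkeeping, as in test/json_config/common.sh (sketch, not verbatim).
    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    app=target
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!    # recorded so the shutdown loop can poll it later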
00:10:34.881 13:29:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:34.881 13:29:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:34.881 13:29:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:34.881 13:29:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:34.881 13:29:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:34.881 13:29:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:34.881 13:29:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:34.881 13:29:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:34.881 13:29:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58722 00:10:34.881 13:29:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:34.881 Waiting for target to run... 00:10:34.881 13:29:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58722 /var/tmp/spdk_tgt.sock 00:10:34.881 13:29:46 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58722 ']' 00:10:34.881 13:29:46 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:34.881 13:29:46 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:34.881 13:29:46 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:34.881 13:29:46 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:34.881 13:29:46 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.881 13:29:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:34.881 [2024-11-20 13:29:46.833004] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:34.881 [2024-11-20 13:29:46.833157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58722 ] 00:10:35.448 [2024-11-20 13:29:47.238772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.448 [2024-11-20 13:29:47.351910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.386 13:29:48 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.386 00:10:36.386 13:29:48 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:10:36.386 13:29:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:36.386 INFO: shutting down applications... 00:10:36.386 13:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
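What follows in the trace is json_config_test_shutdown_app's bounded poll: send SIGINT to the recorded pid, then probe it with kill -0 every half second, giving up after 30 tries (about 15 s). Reduced to its core, with the surrounding error bookkeeping elided:

    pid=${app_pid[target]}
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 only checks liveness; it delivers no signal
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done
    if ! kill -0 "$pid" 2>/dev/null; then
        app_pid[target]=''
        echo 'SPDK target shutdown done'
    fi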
00:10:36.386 13:29:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:36.386 13:29:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:36.386 13:29:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:36.386 13:29:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58722 ]] 00:10:36.386 13:29:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58722 00:10:36.386 13:29:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:36.386 13:29:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:36.386 13:29:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58722 00:10:36.386 13:29:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:36.644 13:29:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:36.644 13:29:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:36.644 13:29:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58722 00:10:36.644 13:29:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:37.212 13:29:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:37.212 13:29:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:37.212 13:29:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58722 00:10:37.212 13:29:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:37.780 13:29:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:37.780 13:29:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:37.780 13:29:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58722 00:10:37.780 13:29:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:38.349 13:29:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:38.349 13:29:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:38.349 13:29:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58722 00:10:38.349 13:29:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:38.916 13:29:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:38.916 13:29:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:38.916 13:29:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58722 00:10:38.916 13:29:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:39.175 13:29:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:39.175 13:29:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:39.175 SPDK target shutdown done 00:10:39.175 13:29:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58722 00:10:39.175 13:29:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:39.175 13:29:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:39.175 13:29:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:39.175 13:29:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:39.175 Success 00:10:39.175 13:29:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:39.175 ************************************ 00:10:39.175 END TEST json_config_extra_key 00:10:39.175 
************************************ 00:10:39.175 00:10:39.175 real 0m4.633s 00:10:39.175 user 0m4.186s 00:10:39.175 sys 0m0.606s 00:10:39.175 13:29:51 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.175 13:29:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:39.434 13:29:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:39.434 13:29:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:39.434 13:29:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.434 13:29:51 -- common/autotest_common.sh@10 -- # set +x 00:10:39.434 ************************************ 00:10:39.434 START TEST alias_rpc 00:10:39.434 ************************************ 00:10:39.434 13:29:51 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:39.434 * Looking for test storage... 00:10:39.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:39.434 13:29:51 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:39.434 13:29:51 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:39.434 13:29:51 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:39.434 13:29:51 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:39.434 13:29:51 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@345 -- # : 1 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.693 13:29:51 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:39.694 13:29:51 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.694 13:29:51 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.694 13:29:51 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.694 13:29:51 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:39.694 13:29:51 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.694 13:29:51 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:39.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.694 --rc genhtml_branch_coverage=1 00:10:39.694 --rc genhtml_function_coverage=1 00:10:39.694 --rc genhtml_legend=1 00:10:39.694 --rc geninfo_all_blocks=1 00:10:39.694 --rc geninfo_unexecuted_blocks=1 00:10:39.694 00:10:39.694 ' 00:10:39.694 13:29:51 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:39.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.694 --rc genhtml_branch_coverage=1 00:10:39.694 --rc genhtml_function_coverage=1 00:10:39.694 --rc genhtml_legend=1 00:10:39.694 --rc geninfo_all_blocks=1 00:10:39.694 --rc geninfo_unexecuted_blocks=1 00:10:39.694 00:10:39.694 ' 00:10:39.694 13:29:51 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:39.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.694 --rc genhtml_branch_coverage=1 00:10:39.694 --rc genhtml_function_coverage=1 00:10:39.694 --rc genhtml_legend=1 00:10:39.694 --rc geninfo_all_blocks=1 00:10:39.694 --rc geninfo_unexecuted_blocks=1 00:10:39.694 00:10:39.694 ' 00:10:39.694 13:29:51 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:39.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.694 --rc genhtml_branch_coverage=1 00:10:39.694 --rc genhtml_function_coverage=1 00:10:39.694 --rc genhtml_legend=1 00:10:39.694 --rc geninfo_all_blocks=1 00:10:39.694 --rc geninfo_unexecuted_blocks=1 00:10:39.694 00:10:39.694 ' 00:10:39.694 13:29:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:39.694 13:29:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58828 00:10:39.694 13:29:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:39.694 13:29:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58828 00:10:39.694 13:29:51 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58828 ']' 00:10:39.694 13:29:51 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.694 13:29:51 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
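While the target comes up, it is worth pinning down the lcov probe that opened this section (and the json_config ones before it): lt 1.15 2 is a component-wise version compare that splits both strings on '.', '-' or ':' and walks the parts left to right. A self-contained re-creation of the idea, assuming purely numeric components (the traced scripts/common.sh helper additionally routes each part through its decimal normalizer):

    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing parts compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal is not less-than
    }

    version_lt 1.15 2 && echo 'lcov predates 2.x: use the old-style --rc flags'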
00:10:39.694 13:29:51 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.694 13:29:51 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.694 13:29:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.694 [2024-11-20 13:29:51.511724] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:39.694 [2024-11-20 13:29:51.511856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58828 ] 00:10:39.953 [2024-11-20 13:29:51.691670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.953 [2024-11-20 13:29:51.808081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.889 13:29:52 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.889 13:29:52 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:40.889 13:29:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:41.148 13:29:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58828 00:10:41.148 13:29:52 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58828 ']' 00:10:41.148 13:29:52 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58828 00:10:41.148 13:29:52 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:10:41.148 13:29:52 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.148 13:29:52 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58828 00:10:41.148 13:29:52 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.148 13:29:52 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.148 killing process with pid 58828 00:10:41.148 13:29:52 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58828' 00:10:41.148 13:29:52 alias_rpc -- common/autotest_common.sh@973 -- # kill 58828 00:10:41.148 13:29:52 alias_rpc -- common/autotest_common.sh@978 -- # wait 58828 00:10:43.683 00:10:43.683 real 0m4.160s 00:10:43.683 user 0m4.126s 00:10:43.683 sys 0m0.593s 00:10:43.683 13:29:55 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.683 13:29:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.683 ************************************ 00:10:43.683 END TEST alias_rpc 00:10:43.683 ************************************ 00:10:43.683 13:29:55 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:43.683 13:29:55 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:43.683 13:29:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:43.683 13:29:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.683 13:29:55 -- common/autotest_common.sh@10 -- # set +x 00:10:43.683 ************************************ 00:10:43.683 START TEST spdkcli_tcp 00:10:43.683 ************************************ 00:10:43.683 13:29:55 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:43.683 * Looking for test storage... 
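The alias_rpc teardown just recorded is the harness's standard killprocess pattern: confirm the pid still answers kill -0, check it is not a sudo wrapper (ps --no-headers -o comm=), then kill and wait so the real exit status is collected. A reduction of those traced steps, not autotest_common.sh verbatim (its sudo branch is more involved):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                      # already gone
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [[ $name == sudo ]] && return 1             # never kill the wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap it so the test sees the exit code
    }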
00:10:43.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:43.683 13:29:55 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.683 13:29:55 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.683 13:29:55 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.683 13:29:55 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.683 13:29:55 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.683 13:29:55 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.683 13:29:55 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.683 13:29:55 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.683 13:29:55 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.683 13:29:55 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.683 13:29:55 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.683 13:29:55 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.683 13:29:55 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.683 13:29:55 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.683 13:29:55 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.683 13:29:55 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:43.683 13:29:55 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:43.683 13:29:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.683 13:29:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:43.942 13:29:55 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:43.942 13:29:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:43.942 13:29:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.942 13:29:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:43.942 13:29:55 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.942 13:29:55 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:43.942 13:29:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:43.942 13:29:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.942 13:29:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:43.942 13:29:55 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.942 13:29:55 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.942 13:29:55 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.942 13:29:55 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:43.942 13:29:55 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.942 13:29:55 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.942 --rc genhtml_branch_coverage=1 00:10:43.942 --rc genhtml_function_coverage=1 00:10:43.942 --rc genhtml_legend=1 00:10:43.942 --rc geninfo_all_blocks=1 00:10:43.942 --rc geninfo_unexecuted_blocks=1 00:10:43.942 00:10:43.942 ' 00:10:43.942 13:29:55 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.942 --rc genhtml_branch_coverage=1 00:10:43.942 --rc genhtml_function_coverage=1 00:10:43.942 --rc genhtml_legend=1 00:10:43.942 --rc geninfo_all_blocks=1 00:10:43.942 --rc geninfo_unexecuted_blocks=1 00:10:43.942 
00:10:43.942 ' 00:10:43.942 13:29:55 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.942 --rc genhtml_branch_coverage=1 00:10:43.942 --rc genhtml_function_coverage=1 00:10:43.942 --rc genhtml_legend=1 00:10:43.942 --rc geninfo_all_blocks=1 00:10:43.942 --rc geninfo_unexecuted_blocks=1 00:10:43.942 00:10:43.942 ' 00:10:43.942 13:29:55 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.942 --rc genhtml_branch_coverage=1 00:10:43.942 --rc genhtml_function_coverage=1 00:10:43.942 --rc genhtml_legend=1 00:10:43.942 --rc geninfo_all_blocks=1 00:10:43.942 --rc geninfo_unexecuted_blocks=1 00:10:43.942 00:10:43.942 ' 00:10:43.942 13:29:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:43.942 13:29:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:43.942 13:29:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:43.942 13:29:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:43.942 13:29:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:43.942 13:29:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:43.942 13:29:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:43.942 13:29:55 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:43.942 13:29:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:43.942 13:29:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58937 00:10:43.942 13:29:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:43.942 13:29:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58937 00:10:43.942 13:29:55 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58937 ']' 00:10:43.942 13:29:55 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.942 13:29:55 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.942 13:29:55 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.942 13:29:55 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.942 13:29:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:43.942 [2024-11-20 13:29:55.778393] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
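spdkcli's tcp.sh never speaks TCP to spdk_tgt directly. As the next stretch of the trace shows, it bridges 127.0.0.1:9998 onto the app's UNIX-domain RPC socket with socat, then points rpc.py at the TCP side; the -r and -t values are taken verbatim from the trace and appear to be the client's connection-retry and timeout knobs. Reassembled:

    # Bridge a TCP port onto the UNIX-domain RPC socket, then issue an RPC over TCP.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"    # tear the bridge down when finished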
00:10:43.942 [2024-11-20 13:29:55.778754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58937 ] 00:10:44.200 [2024-11-20 13:29:55.956995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:44.200 [2024-11-20 13:29:56.074361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.200 [2024-11-20 13:29:56.074377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.137 13:29:56 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.137 13:29:56 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:10:45.137 13:29:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58958 00:10:45.137 13:29:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:45.137 13:29:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:45.397 [ 00:10:45.397 "bdev_malloc_delete", 00:10:45.397 "bdev_malloc_create", 00:10:45.397 "bdev_null_resize", 00:10:45.397 "bdev_null_delete", 00:10:45.397 "bdev_null_create", 00:10:45.397 "bdev_nvme_cuse_unregister", 00:10:45.397 "bdev_nvme_cuse_register", 00:10:45.397 "bdev_opal_new_user", 00:10:45.397 "bdev_opal_set_lock_state", 00:10:45.397 "bdev_opal_delete", 00:10:45.397 "bdev_opal_get_info", 00:10:45.397 "bdev_opal_create", 00:10:45.397 "bdev_nvme_opal_revert", 00:10:45.397 "bdev_nvme_opal_init", 00:10:45.397 "bdev_nvme_send_cmd", 00:10:45.397 "bdev_nvme_set_keys", 00:10:45.397 "bdev_nvme_get_path_iostat", 00:10:45.397 "bdev_nvme_get_mdns_discovery_info", 00:10:45.397 "bdev_nvme_stop_mdns_discovery", 00:10:45.397 "bdev_nvme_start_mdns_discovery", 00:10:45.397 "bdev_nvme_set_multipath_policy", 00:10:45.397 "bdev_nvme_set_preferred_path", 00:10:45.397 "bdev_nvme_get_io_paths", 00:10:45.397 "bdev_nvme_remove_error_injection", 00:10:45.397 "bdev_nvme_add_error_injection", 00:10:45.397 "bdev_nvme_get_discovery_info", 00:10:45.397 "bdev_nvme_stop_discovery", 00:10:45.397 "bdev_nvme_start_discovery", 00:10:45.397 "bdev_nvme_get_controller_health_info", 00:10:45.397 "bdev_nvme_disable_controller", 00:10:45.397 "bdev_nvme_enable_controller", 00:10:45.397 "bdev_nvme_reset_controller", 00:10:45.397 "bdev_nvme_get_transport_statistics", 00:10:45.397 "bdev_nvme_apply_firmware", 00:10:45.397 "bdev_nvme_detach_controller", 00:10:45.397 "bdev_nvme_get_controllers", 00:10:45.397 "bdev_nvme_attach_controller", 00:10:45.397 "bdev_nvme_set_hotplug", 00:10:45.397 "bdev_nvme_set_options", 00:10:45.397 "bdev_passthru_delete", 00:10:45.397 "bdev_passthru_create", 00:10:45.397 "bdev_lvol_set_parent_bdev", 00:10:45.397 "bdev_lvol_set_parent", 00:10:45.397 "bdev_lvol_check_shallow_copy", 00:10:45.397 "bdev_lvol_start_shallow_copy", 00:10:45.397 "bdev_lvol_grow_lvstore", 00:10:45.397 "bdev_lvol_get_lvols", 00:10:45.397 "bdev_lvol_get_lvstores", 00:10:45.397 "bdev_lvol_delete", 00:10:45.397 "bdev_lvol_set_read_only", 00:10:45.397 "bdev_lvol_resize", 00:10:45.397 "bdev_lvol_decouple_parent", 00:10:45.397 "bdev_lvol_inflate", 00:10:45.397 "bdev_lvol_rename", 00:10:45.397 "bdev_lvol_clone_bdev", 00:10:45.397 "bdev_lvol_clone", 00:10:45.397 "bdev_lvol_snapshot", 00:10:45.397 "bdev_lvol_create", 00:10:45.397 "bdev_lvol_delete_lvstore", 00:10:45.397 "bdev_lvol_rename_lvstore", 00:10:45.397 
"bdev_lvol_create_lvstore", 00:10:45.397 "bdev_raid_set_options", 00:10:45.397 "bdev_raid_remove_base_bdev", 00:10:45.397 "bdev_raid_add_base_bdev", 00:10:45.397 "bdev_raid_delete", 00:10:45.397 "bdev_raid_create", 00:10:45.397 "bdev_raid_get_bdevs", 00:10:45.397 "bdev_error_inject_error", 00:10:45.397 "bdev_error_delete", 00:10:45.397 "bdev_error_create", 00:10:45.397 "bdev_split_delete", 00:10:45.397 "bdev_split_create", 00:10:45.397 "bdev_delay_delete", 00:10:45.397 "bdev_delay_create", 00:10:45.397 "bdev_delay_update_latency", 00:10:45.397 "bdev_zone_block_delete", 00:10:45.397 "bdev_zone_block_create", 00:10:45.397 "blobfs_create", 00:10:45.397 "blobfs_detect", 00:10:45.397 "blobfs_set_cache_size", 00:10:45.397 "bdev_xnvme_delete", 00:10:45.397 "bdev_xnvme_create", 00:10:45.397 "bdev_aio_delete", 00:10:45.397 "bdev_aio_rescan", 00:10:45.397 "bdev_aio_create", 00:10:45.397 "bdev_ftl_set_property", 00:10:45.397 "bdev_ftl_get_properties", 00:10:45.397 "bdev_ftl_get_stats", 00:10:45.397 "bdev_ftl_unmap", 00:10:45.397 "bdev_ftl_unload", 00:10:45.397 "bdev_ftl_delete", 00:10:45.397 "bdev_ftl_load", 00:10:45.397 "bdev_ftl_create", 00:10:45.397 "bdev_virtio_attach_controller", 00:10:45.397 "bdev_virtio_scsi_get_devices", 00:10:45.397 "bdev_virtio_detach_controller", 00:10:45.397 "bdev_virtio_blk_set_hotplug", 00:10:45.397 "bdev_iscsi_delete", 00:10:45.397 "bdev_iscsi_create", 00:10:45.397 "bdev_iscsi_set_options", 00:10:45.397 "accel_error_inject_error", 00:10:45.397 "ioat_scan_accel_module", 00:10:45.397 "dsa_scan_accel_module", 00:10:45.397 "iaa_scan_accel_module", 00:10:45.397 "keyring_file_remove_key", 00:10:45.397 "keyring_file_add_key", 00:10:45.397 "keyring_linux_set_options", 00:10:45.397 "fsdev_aio_delete", 00:10:45.397 "fsdev_aio_create", 00:10:45.397 "iscsi_get_histogram", 00:10:45.397 "iscsi_enable_histogram", 00:10:45.397 "iscsi_set_options", 00:10:45.397 "iscsi_get_auth_groups", 00:10:45.397 "iscsi_auth_group_remove_secret", 00:10:45.397 "iscsi_auth_group_add_secret", 00:10:45.397 "iscsi_delete_auth_group", 00:10:45.397 "iscsi_create_auth_group", 00:10:45.397 "iscsi_set_discovery_auth", 00:10:45.397 "iscsi_get_options", 00:10:45.397 "iscsi_target_node_request_logout", 00:10:45.397 "iscsi_target_node_set_redirect", 00:10:45.397 "iscsi_target_node_set_auth", 00:10:45.397 "iscsi_target_node_add_lun", 00:10:45.397 "iscsi_get_stats", 00:10:45.397 "iscsi_get_connections", 00:10:45.397 "iscsi_portal_group_set_auth", 00:10:45.397 "iscsi_start_portal_group", 00:10:45.397 "iscsi_delete_portal_group", 00:10:45.397 "iscsi_create_portal_group", 00:10:45.397 "iscsi_get_portal_groups", 00:10:45.397 "iscsi_delete_target_node", 00:10:45.397 "iscsi_target_node_remove_pg_ig_maps", 00:10:45.397 "iscsi_target_node_add_pg_ig_maps", 00:10:45.397 "iscsi_create_target_node", 00:10:45.397 "iscsi_get_target_nodes", 00:10:45.397 "iscsi_delete_initiator_group", 00:10:45.397 "iscsi_initiator_group_remove_initiators", 00:10:45.397 "iscsi_initiator_group_add_initiators", 00:10:45.397 "iscsi_create_initiator_group", 00:10:45.397 "iscsi_get_initiator_groups", 00:10:45.397 "nvmf_set_crdt", 00:10:45.397 "nvmf_set_config", 00:10:45.397 "nvmf_set_max_subsystems", 00:10:45.397 "nvmf_stop_mdns_prr", 00:10:45.397 "nvmf_publish_mdns_prr", 00:10:45.397 "nvmf_subsystem_get_listeners", 00:10:45.397 "nvmf_subsystem_get_qpairs", 00:10:45.397 "nvmf_subsystem_get_controllers", 00:10:45.397 "nvmf_get_stats", 00:10:45.397 "nvmf_get_transports", 00:10:45.397 "nvmf_create_transport", 00:10:45.397 "nvmf_get_targets", 00:10:45.397 
"nvmf_delete_target", 00:10:45.397 "nvmf_create_target", 00:10:45.397 "nvmf_subsystem_allow_any_host", 00:10:45.397 "nvmf_subsystem_set_keys", 00:10:45.397 "nvmf_subsystem_remove_host", 00:10:45.397 "nvmf_subsystem_add_host", 00:10:45.397 "nvmf_ns_remove_host", 00:10:45.397 "nvmf_ns_add_host", 00:10:45.397 "nvmf_subsystem_remove_ns", 00:10:45.398 "nvmf_subsystem_set_ns_ana_group", 00:10:45.398 "nvmf_subsystem_add_ns", 00:10:45.398 "nvmf_subsystem_listener_set_ana_state", 00:10:45.398 "nvmf_discovery_get_referrals", 00:10:45.398 "nvmf_discovery_remove_referral", 00:10:45.398 "nvmf_discovery_add_referral", 00:10:45.398 "nvmf_subsystem_remove_listener", 00:10:45.398 "nvmf_subsystem_add_listener", 00:10:45.398 "nvmf_delete_subsystem", 00:10:45.398 "nvmf_create_subsystem", 00:10:45.398 "nvmf_get_subsystems", 00:10:45.398 "env_dpdk_get_mem_stats", 00:10:45.398 "nbd_get_disks", 00:10:45.398 "nbd_stop_disk", 00:10:45.398 "nbd_start_disk", 00:10:45.398 "ublk_recover_disk", 00:10:45.398 "ublk_get_disks", 00:10:45.398 "ublk_stop_disk", 00:10:45.398 "ublk_start_disk", 00:10:45.398 "ublk_destroy_target", 00:10:45.398 "ublk_create_target", 00:10:45.398 "virtio_blk_create_transport", 00:10:45.398 "virtio_blk_get_transports", 00:10:45.398 "vhost_controller_set_coalescing", 00:10:45.398 "vhost_get_controllers", 00:10:45.398 "vhost_delete_controller", 00:10:45.398 "vhost_create_blk_controller", 00:10:45.398 "vhost_scsi_controller_remove_target", 00:10:45.398 "vhost_scsi_controller_add_target", 00:10:45.398 "vhost_start_scsi_controller", 00:10:45.398 "vhost_create_scsi_controller", 00:10:45.398 "thread_set_cpumask", 00:10:45.398 "scheduler_set_options", 00:10:45.398 "framework_get_governor", 00:10:45.398 "framework_get_scheduler", 00:10:45.398 "framework_set_scheduler", 00:10:45.398 "framework_get_reactors", 00:10:45.398 "thread_get_io_channels", 00:10:45.398 "thread_get_pollers", 00:10:45.398 "thread_get_stats", 00:10:45.398 "framework_monitor_context_switch", 00:10:45.398 "spdk_kill_instance", 00:10:45.398 "log_enable_timestamps", 00:10:45.398 "log_get_flags", 00:10:45.398 "log_clear_flag", 00:10:45.398 "log_set_flag", 00:10:45.398 "log_get_level", 00:10:45.398 "log_set_level", 00:10:45.398 "log_get_print_level", 00:10:45.398 "log_set_print_level", 00:10:45.398 "framework_enable_cpumask_locks", 00:10:45.398 "framework_disable_cpumask_locks", 00:10:45.398 "framework_wait_init", 00:10:45.398 "framework_start_init", 00:10:45.398 "scsi_get_devices", 00:10:45.398 "bdev_get_histogram", 00:10:45.398 "bdev_enable_histogram", 00:10:45.398 "bdev_set_qos_limit", 00:10:45.398 "bdev_set_qd_sampling_period", 00:10:45.398 "bdev_get_bdevs", 00:10:45.398 "bdev_reset_iostat", 00:10:45.398 "bdev_get_iostat", 00:10:45.398 "bdev_examine", 00:10:45.398 "bdev_wait_for_examine", 00:10:45.398 "bdev_set_options", 00:10:45.398 "accel_get_stats", 00:10:45.398 "accel_set_options", 00:10:45.398 "accel_set_driver", 00:10:45.398 "accel_crypto_key_destroy", 00:10:45.398 "accel_crypto_keys_get", 00:10:45.398 "accel_crypto_key_create", 00:10:45.398 "accel_assign_opc", 00:10:45.398 "accel_get_module_info", 00:10:45.398 "accel_get_opc_assignments", 00:10:45.398 "vmd_rescan", 00:10:45.398 "vmd_remove_device", 00:10:45.398 "vmd_enable", 00:10:45.398 "sock_get_default_impl", 00:10:45.398 "sock_set_default_impl", 00:10:45.398 "sock_impl_set_options", 00:10:45.398 "sock_impl_get_options", 00:10:45.398 "iobuf_get_stats", 00:10:45.398 "iobuf_set_options", 00:10:45.398 "keyring_get_keys", 00:10:45.398 "framework_get_pci_devices", 00:10:45.398 
"framework_get_config", 00:10:45.398 "framework_get_subsystems", 00:10:45.398 "fsdev_set_opts", 00:10:45.398 "fsdev_get_opts", 00:10:45.398 "trace_get_info", 00:10:45.398 "trace_get_tpoint_group_mask", 00:10:45.398 "trace_disable_tpoint_group", 00:10:45.398 "trace_enable_tpoint_group", 00:10:45.398 "trace_clear_tpoint_mask", 00:10:45.398 "trace_set_tpoint_mask", 00:10:45.398 "notify_get_notifications", 00:10:45.398 "notify_get_types", 00:10:45.398 "spdk_get_version", 00:10:45.398 "rpc_get_methods" 00:10:45.398 ] 00:10:45.398 13:29:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:45.398 13:29:57 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.398 13:29:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:45.398 13:29:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:45.398 13:29:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58937 00:10:45.398 13:29:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58937 ']' 00:10:45.398 13:29:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58937 00:10:45.398 13:29:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:10:45.398 13:29:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.398 13:29:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58937 00:10:45.398 killing process with pid 58937 00:10:45.398 13:29:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.398 13:29:57 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.398 13:29:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58937' 00:10:45.398 13:29:57 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58937 00:10:45.398 13:29:57 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58937 00:10:47.970 ************************************ 00:10:47.970 END TEST spdkcli_tcp 00:10:47.970 ************************************ 00:10:47.970 00:10:47.970 real 0m4.292s 00:10:47.970 user 0m7.591s 00:10:47.970 sys 0m0.670s 00:10:47.970 13:29:59 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.970 13:29:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:47.970 13:29:59 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:47.970 13:29:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:47.970 13:29:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.970 13:29:59 -- common/autotest_common.sh@10 -- # set +x 00:10:47.970 ************************************ 00:10:47.970 START TEST dpdk_mem_utility 00:10:47.970 ************************************ 00:10:47.970 13:29:59 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:47.970 * Looking for test storage... 
00:10:48.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:48.228 13:29:59 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:48.228 13:29:59 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:10:48.228 13:29:59 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:48.228 13:30:00 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.228 13:30:00 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:48.228 13:30:00 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.228 13:30:00 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:48.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.228 --rc genhtml_branch_coverage=1 00:10:48.228 --rc genhtml_function_coverage=1 00:10:48.228 --rc genhtml_legend=1 00:10:48.228 --rc geninfo_all_blocks=1 00:10:48.228 --rc geninfo_unexecuted_blocks=1 00:10:48.228 00:10:48.228 ' 00:10:48.228 13:30:00 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:48.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.228 --rc 
genhtml_branch_coverage=1 00:10:48.228 --rc genhtml_function_coverage=1 00:10:48.228 --rc genhtml_legend=1 00:10:48.228 --rc geninfo_all_blocks=1 00:10:48.228 --rc geninfo_unexecuted_blocks=1 00:10:48.228 00:10:48.228 ' 00:10:48.228 13:30:00 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:48.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.228 --rc genhtml_branch_coverage=1 00:10:48.228 --rc genhtml_function_coverage=1 00:10:48.228 --rc genhtml_legend=1 00:10:48.228 --rc geninfo_all_blocks=1 00:10:48.229 --rc geninfo_unexecuted_blocks=1 00:10:48.229 00:10:48.229 ' 00:10:48.229 13:30:00 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:48.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.229 --rc genhtml_branch_coverage=1 00:10:48.229 --rc genhtml_function_coverage=1 00:10:48.229 --rc genhtml_legend=1 00:10:48.229 --rc geninfo_all_blocks=1 00:10:48.229 --rc geninfo_unexecuted_blocks=1 00:10:48.229 00:10:48.229 ' 00:10:48.229 13:30:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:48.229 13:30:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59063 00:10:48.229 13:30:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59063 00:10:48.229 13:30:00 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59063 ']' 00:10:48.229 13:30:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:48.229 13:30:00 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.229 13:30:00 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.229 13:30:00 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.229 13:30:00 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.229 13:30:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:48.229 [2024-11-20 13:30:00.149092] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
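The whole dpdk_mem_utility test boils down to the two commands traced next: an env_dpdk_get_mem_stats RPC asks the running target to write its DPDK heap state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py renders that file, first as a summary and then, with the -m 0 flag used below, as the per-element listing for heap 0. Condensed from the trace:

    # Dump the target's DPDK memory state, then render it two ways.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # RPC response: { "filename": "/tmp/spdk_mem_dump.txt" }

    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py         # heap/mempool/memzone summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0    # detailed element list, as below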
00:10:48.229 [2024-11-20 13:30:00.149716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59063 ]
00:10:48.485 [2024-11-20 13:30:00.336780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:48.743 [2024-11-20 13:30:00.457946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:49.681 13:30:01 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:49.681 13:30:01 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:10:49.681 13:30:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:10:49.681 13:30:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:10:49.681 13:30:01 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.681 13:30:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:10:49.681 {
00:10:49.681 "filename": "/tmp/spdk_mem_dump.txt"
00:10:49.681 }
00:10:49.681 13:30:01 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.681 13:30:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:10:49.681 DPDK memory size 824.000000 MiB in 1 heap(s)
00:10:49.681 1 heaps totaling size 824.000000 MiB
00:10:49.681 size: 824.000000 MiB heap id: 0
00:10:49.681 end heaps----------
00:10:49.681 9 mempools totaling size 603.782043 MiB
00:10:49.681 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:10:49.681 size: 158.602051 MiB name: PDU_data_out_Pool
00:10:49.681 size: 100.555481 MiB name: bdev_io_59063
00:10:49.681 size: 50.003479 MiB name: msgpool_59063
00:10:49.681 size: 36.509338 MiB name: fsdev_io_59063
00:10:49.681 size: 21.763794 MiB name: PDU_Pool
00:10:49.681 size: 19.513306 MiB name: SCSI_TASK_Pool
00:10:49.681 size: 4.133484 MiB name: evtpool_59063
00:10:49.681 size: 0.026123 MiB name: Session_Pool
00:10:49.681 end mempools-------
00:10:49.681 6 memzones totaling size 4.142822 MiB
00:10:49.681 size: 1.000366 MiB name: RG_ring_0_59063
00:10:49.681 size: 1.000366 MiB name: RG_ring_1_59063
00:10:49.681 size: 1.000366 MiB name: RG_ring_4_59063
00:10:49.681 size: 1.000366 MiB name: RG_ring_5_59063
00:10:49.681 size: 0.125366 MiB name: RG_ring_2_59063
00:10:49.681 size: 0.015991 MiB name: RG_ring_3_59063
00:10:49.681 end memzones-------
00:10:49.681 13:30:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:10:49.681 heap id: 0 total size: 824.000000 MiB number of busy elements: 323 number of free elements: 18
00:10:49.681 list of free elements.
size: 16.779419 MiB 00:10:49.681 element at address: 0x200006400000 with size: 1.995972 MiB 00:10:49.681 element at address: 0x20000a600000 with size: 1.995972 MiB 00:10:49.681 element at address: 0x200003e00000 with size: 1.991028 MiB 00:10:49.681 element at address: 0x200019500040 with size: 0.999939 MiB 00:10:49.681 element at address: 0x200019900040 with size: 0.999939 MiB 00:10:49.681 element at address: 0x200019a00000 with size: 0.999084 MiB 00:10:49.681 element at address: 0x200032600000 with size: 0.994324 MiB 00:10:49.681 element at address: 0x200000400000 with size: 0.992004 MiB 00:10:49.681 element at address: 0x200019200000 with size: 0.959656 MiB 00:10:49.681 element at address: 0x200019d00040 with size: 0.936401 MiB 00:10:49.681 element at address: 0x200000200000 with size: 0.716980 MiB 00:10:49.681 element at address: 0x20001b400000 with size: 0.560730 MiB 00:10:49.681 element at address: 0x200000c00000 with size: 0.489197 MiB 00:10:49.681 element at address: 0x200019600000 with size: 0.487976 MiB 00:10:49.681 element at address: 0x200019e00000 with size: 0.485413 MiB 00:10:49.681 element at address: 0x200012c00000 with size: 0.433472 MiB 00:10:49.681 element at address: 0x200028800000 with size: 0.390442 MiB 00:10:49.681 element at address: 0x200000800000 with size: 0.350891 MiB 00:10:49.681 list of standard malloc elements. size: 199.289673 MiB 00:10:49.681 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:10:49.681 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:10:49.681 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:10:49.681 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:10:49.681 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:10:49.681 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:10:49.681 element at address: 0x200019deff40 with size: 0.062683 MiB 00:10:49.681 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:10:49.681 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:10:49.681 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:10:49.681 element at address: 0x200012bff040 with size: 0.000305 MiB 00:10:49.681 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:10:49.681 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:10:49.681 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:10:49.681 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:10:49.681 element at address: 0x200000cff000 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:10:49.681 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012bff180 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012bff280 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012bff380 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012bff480 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012bff580 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012bff680 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012bff780 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012bff880 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012bff980 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:10:49.682 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200019affc40 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4915c0 with size: 0.000244 MiB 
00:10:49.682 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:10:49.682 element at 
address: 0x20001b4947c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200028863f40 with size: 0.000244 MiB 00:10:49.682 element at address: 0x200028864040 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20002886af80 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20002886b080 with size: 0.000244 MiB 00:10:49.682 element at address: 0x20002886b180 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886b280 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886b380 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886b480 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886b580 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886b680 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886b780 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886b880 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886b980 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886be80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886c080 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886c180 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886c280 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886c380 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886c480 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886c580 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886c680 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886c780 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886c880 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886c980 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886d080 
with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886d180 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886d280 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886d380 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886d480 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886d580 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886d680 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886d780 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886d880 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886d980 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886da80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886db80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886de80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886df80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886e080 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886e180 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886e280 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886e380 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886e480 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886e580 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886e680 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886e780 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886e880 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886e980 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886f080 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886f180 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886f280 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886f380 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886f480 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886f580 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886f680 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886f780 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886f880 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886f980 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:10:49.683 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:10:49.683 list of memzone associated elements. 
size: 607.930908 MiB
00:10:49.683 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:10:49.683 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:10:49.683 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:10:49.683 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:10:49.683 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:10:49.683 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59063_0
00:10:49.683 element at address: 0x200000dff340 with size: 48.003113 MiB
00:10:49.683 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59063_0
00:10:49.683 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:10:49.683 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59063_0
00:10:49.683 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:10:49.683 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:10:49.683 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:10:49.683 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:10:49.683 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:10:49.683 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59063_0
00:10:49.683 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:10:49.683 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59063
00:10:49.683 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:10:49.683 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59063
00:10:49.683 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:10:49.683 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:10:49.683 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:10:49.683 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:10:49.683 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:10:49.683 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:10:49.683 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:10:49.683 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:10:49.683 element at address: 0x200000cff100 with size: 1.000549 MiB
00:10:49.683 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59063
00:10:49.683 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:10:49.683 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59063
00:10:49.683 element at address: 0x200019affd40 with size: 1.000549 MiB
00:10:49.683 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59063
00:10:49.683 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:10:49.683 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59063
00:10:49.683 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:10:49.683 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59063
00:10:49.683 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:10:49.683 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59063
00:10:49.683 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:10:49.683 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:10:49.683 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:10:49.683 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:10:49.683 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:10:49.683 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:10:49.683 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:10:49.683 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59063
00:10:49.683 element at address: 0x20000085df80 with size: 0.125549 MiB
00:10:49.683 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59063
00:10:49.683 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:10:49.683 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:10:49.683 element at address: 0x200028864140 with size: 0.023804 MiB
00:10:49.683 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:10:49.683 element at address: 0x200000859d40 with size: 0.016174 MiB
00:10:49.684 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59063
00:10:49.684 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:10:49.684 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:10:49.684 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:10:49.684 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59063
00:10:49.684 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:10:49.684 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59063
00:10:49.684 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:10:49.684 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59063
00:10:49.684 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:10:49.684 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:10:49.684 13:30:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:10:49.684 13:30:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59063
00:10:49.684 13:30:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59063 ']'
00:10:49.684 13:30:01 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59063
00:10:49.684 13:30:01 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:10:49.684 13:30:01 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:49.684 13:30:01 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59063
00:10:49.684 killing process with pid 59063 13:30:01 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:49.684 13:30:01 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:49.684 13:30:01 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59063'
00:10:49.684 13:30:01 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59063
00:10:49.684 13:30:01 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59063
00:10:52.232
00:10:52.232 real 0m4.197s
00:10:52.232 user 0m4.118s
00:10:52.232 sys 0m0.603s
00:10:52.232 13:30:03 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:52.232 ************************************
00:10:52.232 END TEST dpdk_mem_utility
00:10:52.232 ************************************
00:10:52.232 13:30:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:10:52.232 13:30:04 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:10:52.232 13:30:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:52.232 13:30:04 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:52.232 13:30:04 -- common/autotest_common.sh@10 -- # set +x
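The malloc-element and memzone listings above are DPDK heap statistics dumped by test_dpdk_mem_info.sh before it shuts the target down. The same report can be pulled from any running SPDK app over its RPC socket; a minimal sketch, assuming the env_dpdk_get_mem_stats RPC and its usual output path (both worth verifying against the tree in use):

    # Ask a live SPDK application to write a DPDK memory report; the RPC
    # replies with the name of the file it wrote (assumed default below).
    scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
    cat /tmp/spdk_mem_dump.txt    # element/memzone listing like the one above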
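The teardown trace above is autotest_common.sh's killprocess helper stopping the reactor (pid 59063). Collapsed into one place, the traced flow looks roughly like this; a sketch reconstructed from the xtrace, not the exact in-tree code:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # '[' -z 59063 ']' in the trace
        kill -0 "$pid" || return 0             # probe: is the process still alive?
        local process_name=""
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # -> reactor_0 here
        fi
        # the real helper special-cases process_name = sudo at this point
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                            # reap it so the test exits cleanly
    }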
00:10:52.232 ************************************ 00:10:52.232 START TEST event 00:10:52.232 ************************************ 00:10:52.232 13:30:04 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:52.232 * Looking for test storage... 00:10:52.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:52.232 13:30:04 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:52.232 13:30:04 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:52.232 13:30:04 event -- common/autotest_common.sh@1693 -- # lcov --version 00:10:52.491 13:30:04 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:52.491 13:30:04 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.491 13:30:04 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.491 13:30:04 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.491 13:30:04 event -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.491 13:30:04 event -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.491 13:30:04 event -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.491 13:30:04 event -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.491 13:30:04 event -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.491 13:30:04 event -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.491 13:30:04 event -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.491 13:30:04 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.491 13:30:04 event -- scripts/common.sh@344 -- # case "$op" in 00:10:52.491 13:30:04 event -- scripts/common.sh@345 -- # : 1 00:10:52.491 13:30:04 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.491 13:30:04 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:52.491 13:30:04 event -- scripts/common.sh@365 -- # decimal 1 00:10:52.492 13:30:04 event -- scripts/common.sh@353 -- # local d=1 00:10:52.492 13:30:04 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.492 13:30:04 event -- scripts/common.sh@355 -- # echo 1 00:10:52.492 13:30:04 event -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.492 13:30:04 event -- scripts/common.sh@366 -- # decimal 2 00:10:52.492 13:30:04 event -- scripts/common.sh@353 -- # local d=2 00:10:52.492 13:30:04 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.492 13:30:04 event -- scripts/common.sh@355 -- # echo 2 00:10:52.492 13:30:04 event -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.492 13:30:04 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.492 13:30:04 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.492 13:30:04 event -- scripts/common.sh@368 -- # return 0 00:10:52.492 13:30:04 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.492 13:30:04 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:52.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.492 --rc genhtml_branch_coverage=1 00:10:52.492 --rc genhtml_function_coverage=1 00:10:52.492 --rc genhtml_legend=1 00:10:52.492 --rc geninfo_all_blocks=1 00:10:52.492 --rc geninfo_unexecuted_blocks=1 00:10:52.492 00:10:52.492 ' 00:10:52.492 13:30:04 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:52.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.492 --rc genhtml_branch_coverage=1 00:10:52.492 --rc genhtml_function_coverage=1 00:10:52.492 --rc genhtml_legend=1 00:10:52.492 --rc 
geninfo_all_blocks=1 00:10:52.492 --rc geninfo_unexecuted_blocks=1 00:10:52.492 00:10:52.492 ' 00:10:52.492 13:30:04 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:52.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.492 --rc genhtml_branch_coverage=1 00:10:52.492 --rc genhtml_function_coverage=1 00:10:52.492 --rc genhtml_legend=1 00:10:52.492 --rc geninfo_all_blocks=1 00:10:52.492 --rc geninfo_unexecuted_blocks=1 00:10:52.492 00:10:52.492 ' 00:10:52.492 13:30:04 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:52.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.492 --rc genhtml_branch_coverage=1 00:10:52.492 --rc genhtml_function_coverage=1 00:10:52.492 --rc genhtml_legend=1 00:10:52.492 --rc geninfo_all_blocks=1 00:10:52.492 --rc geninfo_unexecuted_blocks=1 00:10:52.492 00:10:52.492 ' 00:10:52.492 13:30:04 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:52.492 13:30:04 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:52.492 13:30:04 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:52.492 13:30:04 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:52.492 13:30:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.492 13:30:04 event -- common/autotest_common.sh@10 -- # set +x 00:10:52.492 ************************************ 00:10:52.492 START TEST event_perf 00:10:52.492 ************************************ 00:10:52.492 13:30:04 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:52.492 Running I/O for 1 seconds...[2024-11-20 13:30:04.303554] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:52.492 [2024-11-20 13:30:04.303803] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59171 ] 00:10:52.751 [2024-11-20 13:30:04.502911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.751 [2024-11-20 13:30:04.631634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.751 [2024-11-20 13:30:04.631768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.751 Running I/O for 1 seconds...[2024-11-20 13:30:04.631906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.751 [2024-11-20 13:30:04.631995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.123 00:10:54.123 lcore 0: 201917 00:10:54.123 lcore 1: 201921 00:10:54.123 lcore 2: 201925 00:10:54.123 lcore 3: 201913 00:10:54.123 done. 
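Before the event binaries run, the harness probes lcov and gates the newer coverage flags on its version; the lt/cmp_versions xtrace near the top of this suite (it reappears before event_scheduler below) splits each version string on '.', '-' and ':' and compares field by field. A condensed sketch of that logic, simplified to numeric fields:

    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2 ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            # a missing field compares as 0, so 1.15 vs 2 becomes 1 vs 2, then 15 vs 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]                  # all fields equal: true for ==, <=, >=
    }
    lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2 -> true, as traced above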
00:10:54.123 00:10:54.123 real 0m1.625s 00:10:54.123 user 0m4.362s 00:10:54.123 sys 0m0.136s 00:10:54.123 13:30:05 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.123 13:30:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:54.123 ************************************ 00:10:54.123 END TEST event_perf 00:10:54.123 ************************************ 00:10:54.123 13:30:05 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:54.123 13:30:05 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:54.123 13:30:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.123 13:30:05 event -- common/autotest_common.sh@10 -- # set +x 00:10:54.123 ************************************ 00:10:54.123 START TEST event_reactor 00:10:54.123 ************************************ 00:10:54.123 13:30:05 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:54.123 [2024-11-20 13:30:05.998728] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:54.123 [2024-11-20 13:30:05.999112] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59210 ] 00:10:54.380 [2024-11-20 13:30:06.183218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.380 [2024-11-20 13:30:06.300316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.752 test_start 00:10:55.752 oneshot 00:10:55.752 tick 100 00:10:55.752 tick 100 00:10:55.752 tick 250 00:10:55.752 tick 100 00:10:55.752 tick 100 00:10:55.752 tick 100 00:10:55.752 tick 250 00:10:55.752 tick 500 00:10:55.752 tick 100 00:10:55.752 tick 100 00:10:55.752 tick 250 00:10:55.752 tick 100 00:10:55.752 tick 100 00:10:55.752 test_end 00:10:55.752 00:10:55.752 real 0m1.590s 00:10:55.752 user 0m1.367s 00:10:55.752 sys 0m0.113s 00:10:55.752 13:30:07 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.752 13:30:07 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:55.752 ************************************ 00:10:55.752 END TEST event_reactor 00:10:55.752 ************************************ 00:10:55.752 13:30:07 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:55.752 13:30:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:55.752 13:30:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.752 13:30:07 event -- common/autotest_common.sh@10 -- # set +x 00:10:55.752 ************************************ 00:10:55.752 START TEST event_reactor_perf 00:10:55.752 ************************************ 00:10:55.752 13:30:07 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:55.752 [2024-11-20 13:30:07.656105] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
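event_perf above, event_reactor, and now event_reactor_perf are all driven the same way: a run_test wrapper prints the START banner, times the command (the real/user/sys triplets in this log), and prints the END banner. A minimal sketch of that pattern, reconstructed from the output rather than from autotest_common.sh itself:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # produces the real/user/sys lines seen in this log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # e.g. run_test event_reactor "$testdir/reactor/reactor" -t 1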
00:10:55.752 [2024-11-20 13:30:07.656245] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59247 ] 00:10:56.015 [2024-11-20 13:30:07.840218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.015 [2024-11-20 13:30:07.961583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.402 test_start 00:10:57.403 test_end 00:10:57.403 Performance: 362299 events per second 00:10:57.403 00:10:57.403 real 0m1.598s 00:10:57.403 user 0m1.360s 00:10:57.403 sys 0m0.127s 00:10:57.403 13:30:09 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.403 ************************************ 00:10:57.403 END TEST event_reactor_perf 00:10:57.403 ************************************ 00:10:57.403 13:30:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:57.403 13:30:09 event -- event/event.sh@49 -- # uname -s 00:10:57.403 13:30:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:57.403 13:30:09 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:57.403 13:30:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:57.403 13:30:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.403 13:30:09 event -- common/autotest_common.sh@10 -- # set +x 00:10:57.403 ************************************ 00:10:57.403 START TEST event_scheduler 00:10:57.403 ************************************ 00:10:57.403 13:30:09 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:57.661 * Looking for test storage... 
00:10:57.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:57.661 13:30:09 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:57.661 13:30:09 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:10:57.661 13:30:09 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:57.661 13:30:09 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.661 13:30:09 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:57.662 13:30:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:57.662 13:30:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.662 13:30:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:57.662 13:30:09 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.662 13:30:09 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:57.662 13:30:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:57.662 13:30:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.662 13:30:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:57.662 13:30:09 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.662 13:30:09 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.662 13:30:09 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.662 13:30:09 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:57.662 13:30:09 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.662 13:30:09 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:57.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.662 --rc genhtml_branch_coverage=1 00:10:57.662 --rc genhtml_function_coverage=1 00:10:57.662 --rc genhtml_legend=1 00:10:57.662 --rc geninfo_all_blocks=1 00:10:57.662 --rc geninfo_unexecuted_blocks=1 00:10:57.662 00:10:57.662 ' 00:10:57.662 13:30:09 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:57.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.662 --rc genhtml_branch_coverage=1 00:10:57.662 --rc genhtml_function_coverage=1 00:10:57.662 --rc genhtml_legend=1 00:10:57.662 --rc geninfo_all_blocks=1 00:10:57.662 --rc geninfo_unexecuted_blocks=1 00:10:57.662 00:10:57.662 ' 00:10:57.662 13:30:09 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:57.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.662 --rc genhtml_branch_coverage=1 00:10:57.662 --rc genhtml_function_coverage=1 00:10:57.662 --rc genhtml_legend=1 00:10:57.662 --rc geninfo_all_blocks=1 00:10:57.662 --rc geninfo_unexecuted_blocks=1 00:10:57.662 00:10:57.662 ' 00:10:57.662 13:30:09 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:57.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.662 --rc genhtml_branch_coverage=1 00:10:57.662 --rc genhtml_function_coverage=1 00:10:57.662 --rc genhtml_legend=1 00:10:57.662 --rc geninfo_all_blocks=1 00:10:57.662 --rc geninfo_unexecuted_blocks=1 00:10:57.662 00:10:57.662 ' 00:10:57.662 13:30:09 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:57.662 13:30:09 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59323 00:10:57.662 13:30:09 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:57.662 13:30:09 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:57.662 13:30:09 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59323 00:10:57.662 13:30:09 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59323 ']' 00:10:57.662 13:30:09 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.662 13:30:09 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.662 13:30:09 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.662 13:30:09 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.662 13:30:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:57.662 [2024-11-20 13:30:09.604229] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:57.662 [2024-11-20 13:30:09.604595] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59323 ] 00:10:57.920 [2024-11-20 13:30:09.788793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.178 [2024-11-20 13:30:09.916110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.178 [2024-11-20 13:30:09.916163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.178 [2024-11-20 13:30:09.916237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.178 [2024-11-20 13:30:09.916261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.745 13:30:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.745 13:30:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:10:58.745 13:30:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:58.745 13:30:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.745 13:30:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:58.745 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:58.745 POWER: Cannot set governor of lcore 0 to userspace 00:10:58.745 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:58.745 POWER: Cannot set governor of lcore 0 to performance 00:10:58.745 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:58.745 POWER: Cannot set governor of lcore 0 to userspace 00:10:58.745 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:58.745 POWER: Cannot set governor of lcore 0 to userspace 00:10:58.745 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:58.745 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:58.745 POWER: Unable to set Power Management Environment for lcore 0 00:10:58.745 [2024-11-20 13:30:10.498049] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:10:58.745 [2024-11-20 13:30:10.498109] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:10:58.745 [2024-11-20 13:30:10.498210] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:58.745 [2024-11-20 13:30:10.498268] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:58.745 [2024-11-20 13:30:10.498431] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:58.745 [2024-11-20 13:30:10.498500] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:58.745 13:30:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.745 13:30:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:58.745 13:30:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.745 13:30:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:59.004 [2024-11-20 13:30:10.854697] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:10:59.004 13:30:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.004 13:30:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:59.004 13:30:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:59.004 13:30:10 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.004 13:30:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:59.004 ************************************ 00:10:59.004 START TEST scheduler_create_thread 00:10:59.004 ************************************ 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.004 2 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.004 3 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.004 4 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.004 5 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.004 6 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.004 7 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.004 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.264 8 00:10:59.264 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.264 13:30:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:59.264 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.264 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.264 9 00:10:59.264 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.264 13:30:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:59.264 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.264 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.264 10 00:10:59.264 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.264 13:30:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:59.264 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.264 13:30:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:00.642 13:30:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.642 13:30:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:00.642 13:30:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:00.642 13:30:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.642 13:30:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:01.210 13:30:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.210 13:30:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:01.210 13:30:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.210 13:30:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:02.145 13:30:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.145 13:30:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:02.145 13:30:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:02.145 13:30:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.145 13:30:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:03.083 ************************************ 00:11:03.083 END TEST scheduler_create_thread 00:11:03.083 ************************************ 00:11:03.083 13:30:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.083 00:11:03.083 real 0m3.885s 00:11:03.083 user 0m0.022s 00:11:03.083 sys 0m0.011s 00:11:03.083 13:30:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.083 13:30:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:03.083 13:30:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:03.083 13:30:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59323 00:11:03.083 13:30:14 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59323 ']' 00:11:03.083 13:30:14 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59323 00:11:03.083 13:30:14 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:11:03.083 13:30:14 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.083 13:30:14 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59323 00:11:03.083 killing process with pid 59323 00:11:03.083 13:30:14 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:03.083 13:30:14 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:03.083 13:30:14 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59323' 00:11:03.083 13:30:14 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59323 00:11:03.084 13:30:14 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59323 00:11:03.348 [2024-11-20 13:30:15.135524] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:11:04.741 00:11:04.741 real 0m7.085s 00:11:04.741 user 0m14.685s 00:11:04.741 sys 0m0.551s 00:11:04.741 ************************************ 00:11:04.741 END TEST event_scheduler 00:11:04.741 13:30:16 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.741 13:30:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:04.741 ************************************ 00:11:04.741 13:30:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:04.741 13:30:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:04.741 13:30:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:04.741 13:30:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.741 13:30:16 event -- common/autotest_common.sh@10 -- # set +x 00:11:04.741 ************************************ 00:11:04.741 START TEST app_repeat 00:11:04.741 ************************************ 00:11:04.741 13:30:16 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:11:04.741 13:30:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.741 13:30:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:04.741 13:30:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:04.741 13:30:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:04.741 13:30:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:04.741 13:30:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:04.741 13:30:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:04.741 13:30:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59451 00:11:04.741 13:30:16 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:04.741 13:30:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:04.741 Process app_repeat pid: 59451 00:11:04.741 spdk_app_start Round 0 00:11:04.741 13:30:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59451' 00:11:04.741 13:30:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:04.741 13:30:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:04.741 13:30:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59451 /var/tmp/spdk-nbd.sock 00:11:04.741 13:30:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59451 ']' 00:11:04.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:04.741 13:30:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:04.741 13:30:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.741 13:30:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:04.741 13:30:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.741 13:30:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:04.741 [2024-11-20 13:30:16.509320] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
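The scheduler_create_thread subtest that just finished drove the scheduler test app purely through RPCs: create pinned active and idle threads, throttle one to 50% activity, then delete it. Boiled down, the traced sequence is roughly the following; the scheduler_thread_* methods come from the test's own RPC plugin (test/event/scheduler), not stock rpc.py, and the sketch assumes the create call prints the new thread id, as the thread_id=11/12 assignments in the trace suggest:

    # PYTHONPATH must include the plugin directory for --plugin to resolve.
    rpc() { scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin "$@"; }
    thread_id=$(rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    rpc scheduler_thread_set_active "$thread_id" 50    # drop it to ~50% busy
    rpc scheduler_thread_delete "$thread_id"           # and tear it down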
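app_repeat is now starting; as with the scheduler app above, the harness blocks in waitforlisten until the new process answers on its UNIX-domain RPC socket (/var/tmp/spdk-nbd.sock here). A hedged sketch of that wait loop, assuming rpc.py and its rpc_get_methods method; the real helper is more involved and also watches for the socket file itself:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" || return 1       # target died before it ever listened
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                     # socket is up and answering RPCs
            fi
            sleep 0.5
        done
        return 1                             # timed out
    }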
00:11:04.741 [2024-11-20 13:30:16.509461] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59451 ] 00:11:04.741 [2024-11-20 13:30:16.695431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:05.000 [2024-11-20 13:30:16.823108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.000 [2024-11-20 13:30:16.823143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.567 13:30:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.567 13:30:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:05.567 13:30:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:05.825 Malloc0 00:11:05.825 13:30:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:06.466 Malloc1 00:11:06.466 13:30:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:06.466 13:30:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.466 13:30:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:06.466 13:30:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:06.466 13:30:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:06.466 13:30:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:06.466 13:30:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:06.466 13:30:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.466 13:30:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:06.466 13:30:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:06.466 13:30:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:06.466 13:30:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:06.466 13:30:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:06.466 13:30:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:06.466 13:30:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:06.466 13:30:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:06.466 /dev/nbd0 00:11:06.732 13:30:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:06.732 13:30:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:06.733 13:30:18 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:06.733 1+0 records in 00:11:06.733 1+0 records out 00:11:06.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365927 s, 11.2 MB/s 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:06.733 13:30:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:06.733 13:30:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:06.733 13:30:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:06.733 /dev/nbd1 00:11:06.733 13:30:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:06.733 13:30:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:06.733 1+0 records in 00:11:06.733 1+0 records out 00:11:06.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496466 s, 8.3 MB/s 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:06.733 13:30:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:06.733 13:30:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:06.733 13:30:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:06.733 13:30:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:06.733 13:30:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.733 
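For orientation: the Round 0 setup traced above reduces to four RPC calls over the app's UNIX socket. Two 64 MB malloc bdevs (4096-byte blocks) are created and each is exported as a kernel NBD block device. A minimal sketch, with the socket path and sizes taken from the trace, run from an SPDK checkout against the already-listening app_repeat process:

  # create the backing bdevs (64 MB total size, 4 KiB block size)
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1
  # expose them as NBD devices
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1

After each nbd_start_disk, waitfornbd polls /proc/partitions (grep -q -w nbdN) and then issues a single 4 KiB O_DIRECT dd read, as traced above, to confirm the device actually answers I/O before the test proceeds.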
13:30:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:06.991 13:30:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:06.991 { 00:11:06.991 "nbd_device": "/dev/nbd0", 00:11:06.991 "bdev_name": "Malloc0" 00:11:06.991 }, 00:11:06.991 { 00:11:06.991 "nbd_device": "/dev/nbd1", 00:11:06.991 "bdev_name": "Malloc1" 00:11:06.991 } 00:11:06.991 ]' 00:11:06.991 13:30:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:06.991 { 00:11:06.991 "nbd_device": "/dev/nbd0", 00:11:06.991 "bdev_name": "Malloc0" 00:11:06.991 }, 00:11:06.991 { 00:11:06.991 "nbd_device": "/dev/nbd1", 00:11:06.991 "bdev_name": "Malloc1" 00:11:06.991 } 00:11:06.991 ]' 00:11:06.991 13:30:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:07.249 /dev/nbd1' 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:07.249 /dev/nbd1' 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:07.249 256+0 records in 00:11:07.249 256+0 records out 00:11:07.249 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00705958 s, 149 MB/s 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:07.249 13:30:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:07.249 256+0 records in 00:11:07.249 256+0 records out 00:11:07.249 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275213 s, 38.1 MB/s 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:07.249 256+0 records in 00:11:07.249 256+0 records out 00:11:07.249 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0359193 s, 29.2 MB/s 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:07.249 13:30:19 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:07.249 13:30:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:07.507 13:30:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:07.507 13:30:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:07.507 13:30:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:07.507 13:30:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:07.507 13:30:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:07.507 13:30:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:07.507 13:30:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:07.507 13:30:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:07.507 13:30:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:07.507 13:30:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:07.765 13:30:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:07.765 13:30:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:07.765 13:30:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:07.765 13:30:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:07.765 13:30:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:07.765 13:30:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:07.765 13:30:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:07.765 13:30:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:07.765 13:30:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:07.765 13:30:19 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:07.765 13:30:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:08.023 13:30:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:08.023 13:30:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:08.023 13:30:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:08.023 13:30:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:08.023 13:30:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:08.023 13:30:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:08.023 13:30:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:08.023 13:30:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:08.023 13:30:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:08.023 13:30:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:08.023 13:30:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:08.023 13:30:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:08.023 13:30:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:08.590 13:30:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:09.963 [2024-11-20 13:30:21.527584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:09.963 [2024-11-20 13:30:21.646593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.963 [2024-11-20 13:30:21.646593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.963 [2024-11-20 13:30:21.847630] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:09.963 [2024-11-20 13:30:21.847767] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:11.866 spdk_app_start Round 1 00:11:11.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:11.866 13:30:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:11.866 13:30:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:11.866 13:30:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59451 /var/tmp/spdk-nbd.sock 00:11:11.866 13:30:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59451 ']' 00:11:11.866 13:30:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:11.866 13:30:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.866 13:30:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
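Each round then runs the same data pass over both NBDs: fill a scratch file with random data, write it to every device, and byte-compare each device back against the file. Pieced together from the dd and cmp invocations in the Round 0 trace (a sketch of the flow, not the verbatim nbd_common.sh helper):

  # write phase: 1 MiB of random data, pushed to each NBD with O_DIRECT
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
  done
  # verify phase: compare the first 1 MiB of each device against the file
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M nbdrandtest "$nbd"
  done
  rm nbdrandtest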
00:11:11.866 13:30:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.866 13:30:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:11.866 13:30:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.866 13:30:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:11.866 13:30:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:12.125 Malloc0 00:11:12.125 13:30:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:12.385 Malloc1 00:11:12.385 13:30:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:12.385 13:30:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.385 13:30:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:12.385 13:30:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:12.385 13:30:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:12.385 13:30:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:12.385 13:30:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:12.385 13:30:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.385 13:30:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:12.385 13:30:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:12.385 13:30:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:12.385 13:30:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:12.385 13:30:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:12.385 13:30:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:12.385 13:30:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:12.385 13:30:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:12.644 /dev/nbd0 00:11:12.644 13:30:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:12.644 13:30:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:12.644 13:30:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:12.644 13:30:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:12.644 13:30:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:12.644 13:30:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:12.644 13:30:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:12.644 13:30:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:12.644 13:30:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:12.644 13:30:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:12.644 13:30:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:12.644 1+0 records in 00:11:12.644 1+0 records out 
00:11:12.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334339 s, 12.3 MB/s 00:11:12.644 13:30:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.644 13:30:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:12.644 13:30:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.644 13:30:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:12.644 13:30:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:12.644 13:30:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.644 13:30:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:12.644 13:30:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:12.904 /dev/nbd1 00:11:12.904 13:30:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:12.904 13:30:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:12.904 13:30:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:12.904 13:30:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:12.904 13:30:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:12.904 13:30:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:12.904 13:30:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:12.904 13:30:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:12.904 13:30:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:12.904 13:30:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:12.904 13:30:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:12.904 1+0 records in 00:11:12.904 1+0 records out 00:11:12.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396879 s, 10.3 MB/s 00:11:12.904 13:30:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.904 13:30:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:12.904 13:30:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.904 13:30:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:12.904 13:30:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:12.904 13:30:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.904 13:30:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:12.904 13:30:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:12.904 13:30:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.904 13:30:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:13.162 13:30:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:13.162 { 00:11:13.162 "nbd_device": "/dev/nbd0", 00:11:13.162 "bdev_name": "Malloc0" 00:11:13.162 }, 00:11:13.162 { 00:11:13.162 "nbd_device": "/dev/nbd1", 00:11:13.162 "bdev_name": "Malloc1" 00:11:13.162 } 
00:11:13.162 ]' 00:11:13.162 13:30:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:13.162 { 00:11:13.162 "nbd_device": "/dev/nbd0", 00:11:13.162 "bdev_name": "Malloc0" 00:11:13.162 }, 00:11:13.162 { 00:11:13.162 "nbd_device": "/dev/nbd1", 00:11:13.162 "bdev_name": "Malloc1" 00:11:13.162 } 00:11:13.162 ]' 00:11:13.162 13:30:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:13.162 13:30:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:13.162 /dev/nbd1' 00:11:13.162 13:30:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:13.162 /dev/nbd1' 00:11:13.162 13:30:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:13.422 256+0 records in 00:11:13.422 256+0 records out 00:11:13.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116434 s, 90.1 MB/s 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:13.422 256+0 records in 00:11:13.422 256+0 records out 00:11:13.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028335 s, 37.0 MB/s 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:13.422 256+0 records in 00:11:13.422 256+0 records out 00:11:13.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0381267 s, 27.5 MB/s 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.422 13:30:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:13.681 13:30:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:13.681 13:30:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:13.681 13:30:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:13.681 13:30:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.681 13:30:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.681 13:30:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:13.681 13:30:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:13.681 13:30:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.681 13:30:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.681 13:30:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:13.940 13:30:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:13.940 13:30:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:13.940 13:30:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:13.940 13:30:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.940 13:30:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.940 13:30:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:13.940 13:30:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:13.940 13:30:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.940 13:30:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:13.940 13:30:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.940 13:30:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:14.198 13:30:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:14.198 13:30:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:14.198 13:30:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:11:14.198 13:30:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:14.198 13:30:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:14.198 13:30:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:14.198 13:30:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:14.198 13:30:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:14.198 13:30:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:14.198 13:30:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:14.198 13:30:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:14.198 13:30:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:14.198 13:30:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:14.765 13:30:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:15.746 [2024-11-20 13:30:27.660087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:16.005 [2024-11-20 13:30:27.803887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.005 [2024-11-20 13:30:27.803918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.264 [2024-11-20 13:30:28.037256] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:16.264 [2024-11-20 13:30:28.037376] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:17.639 spdk_app_start Round 2 00:11:17.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:17.639 13:30:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:17.639 13:30:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:17.639 13:30:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59451 /var/tmp/spdk-nbd.sock 00:11:17.639 13:30:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59451 ']' 00:11:17.639 13:30:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:17.639 13:30:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.639 13:30:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
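The nbd_get_count checks that bracket each round list the exported disks as JSON and count the device nodes; the count must be 2 while the devices are attached and 0 after teardown. A sketch of that check (socket path from the trace; the || true mirrors the bare true seen in the xtrace above, since grep -c exits non-zero when nothing matches):

  json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  names=$(echo "$json" | jq -r '.[] | .nbd_device')    # "/dev/nbd0 /dev/nbd1", or empty
  count=$(echo "$names" | grep -c /dev/nbd || true)    # 2 while attached, 0 after stop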
00:11:17.639 13:30:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.639 13:30:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:17.898 13:30:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.898 13:30:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:17.898 13:30:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:18.156 Malloc0 00:11:18.156 13:30:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:18.724 Malloc1 00:11:18.724 13:30:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:18.724 13:30:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:18.724 13:30:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:18.724 13:30:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:18.724 13:30:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:18.724 13:30:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:18.724 13:30:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:18.724 13:30:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:18.724 13:30:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:18.724 13:30:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:18.724 13:30:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:18.724 13:30:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:18.724 13:30:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:18.724 13:30:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:18.724 13:30:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:18.724 13:30:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:18.983 /dev/nbd0 00:11:18.983 13:30:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:18.983 13:30:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:18.983 13:30:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:18.983 13:30:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:18.983 13:30:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:18.983 13:30:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:18.983 13:30:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:18.983 13:30:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:18.983 13:30:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:18.983 13:30:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:18.983 13:30:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:18.983 1+0 records in 00:11:18.983 1+0 records out 
00:11:18.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413361 s, 9.9 MB/s 00:11:18.983 13:30:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:18.983 13:30:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:18.983 13:30:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:18.983 13:30:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:18.983 13:30:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:18.983 13:30:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.983 13:30:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:18.983 13:30:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:19.241 /dev/nbd1 00:11:19.241 13:30:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:19.241 13:30:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:19.241 13:30:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:19.241 13:30:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:19.241 13:30:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:19.241 13:30:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:19.241 13:30:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:19.241 13:30:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:19.241 13:30:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:19.241 13:30:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:19.241 13:30:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:19.241 1+0 records in 00:11:19.241 1+0 records out 00:11:19.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000803947 s, 5.1 MB/s 00:11:19.241 13:30:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:19.241 13:30:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:19.241 13:30:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:19.241 13:30:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:19.241 13:30:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:19.241 13:30:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:19.241 13:30:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:19.241 13:30:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:19.241 13:30:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.241 13:30:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:19.501 { 00:11:19.501 "nbd_device": "/dev/nbd0", 00:11:19.501 "bdev_name": "Malloc0" 00:11:19.501 }, 00:11:19.501 { 00:11:19.501 "nbd_device": "/dev/nbd1", 00:11:19.501 "bdev_name": "Malloc1" 00:11:19.501 } 
00:11:19.501 ]' 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:19.501 { 00:11:19.501 "nbd_device": "/dev/nbd0", 00:11:19.501 "bdev_name": "Malloc0" 00:11:19.501 }, 00:11:19.501 { 00:11:19.501 "nbd_device": "/dev/nbd1", 00:11:19.501 "bdev_name": "Malloc1" 00:11:19.501 } 00:11:19.501 ]' 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:19.501 /dev/nbd1' 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:19.501 /dev/nbd1' 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:19.501 256+0 records in 00:11:19.501 256+0 records out 00:11:19.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139125 s, 75.4 MB/s 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:19.501 256+0 records in 00:11:19.501 256+0 records out 00:11:19.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032018 s, 32.7 MB/s 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:19.501 256+0 records in 00:11:19.501 256+0 records out 00:11:19.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0381415 s, 27.5 MB/s 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:19.501 13:30:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:19.769 13:30:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:20.044 13:30:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:20.044 13:30:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:20.044 13:30:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:20.044 13:30:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.044 13:30:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.044 13:30:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:20.044 13:30:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:20.044 13:30:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.044 13:30:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:20.044 13:30:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:20.044 13:30:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:20.303 13:30:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:20.303 13:30:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:20.303 13:30:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:11:20.303 13:30:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:20.303 13:30:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:20.303 13:30:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:20.303 13:30:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:20.303 13:30:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:20.303 13:30:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:20.303 13:30:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:20.303 13:30:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:20.303 13:30:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:20.303 13:30:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:20.870 13:30:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:22.247 [2024-11-20 13:30:33.823421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:22.247 [2024-11-20 13:30:33.948212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.247 [2024-11-20 13:30:33.948217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.247 [2024-11-20 13:30:34.152154] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:22.247 [2024-11-20 13:30:34.152245] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:24.153 13:30:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59451 /var/tmp/spdk-nbd.sock 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59451 ']' 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
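Teardown, traced at the end of each round, is symmetric with setup: detach both NBDs, confirm the disk list is empty, then ask the app to exit with SIGTERM so the next round (or the final summary) can proceed. Roughly, using the same commands that appear in the trace:

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks   # -> [] once both are gone
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM

After each stop, waitfornbd_exit loops on grep -q -w nbdN /proc/partitions, breaking once the kernel has dropped the partition entry.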
00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:24.153 13:30:35 event.app_repeat -- event/event.sh@39 -- # killprocess 59451 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59451 ']' 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59451 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59451 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59451' 00:11:24.153 killing process with pid 59451 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59451 00:11:24.153 13:30:35 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59451 00:11:25.106 spdk_app_start is called in Round 0. 00:11:25.106 Shutdown signal received, stop current app iteration 00:11:25.106 Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 reinitialization... 00:11:25.106 spdk_app_start is called in Round 1. 00:11:25.106 Shutdown signal received, stop current app iteration 00:11:25.106 Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 reinitialization... 00:11:25.106 spdk_app_start is called in Round 2. 00:11:25.106 Shutdown signal received, stop current app iteration 00:11:25.106 Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 reinitialization... 00:11:25.106 spdk_app_start is called in Round 3. 00:11:25.106 Shutdown signal received, stop current app iteration 00:11:25.106 13:30:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:25.106 13:30:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:25.106 00:11:25.106 real 0m20.565s 00:11:25.106 user 0m44.011s 00:11:25.106 sys 0m3.517s 00:11:25.106 ************************************ 00:11:25.106 END TEST app_repeat 00:11:25.106 ************************************ 00:11:25.106 13:30:37 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.106 13:30:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:25.366 13:30:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:25.366 13:30:37 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:25.366 13:30:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:25.366 13:30:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.366 13:30:37 event -- common/autotest_common.sh@10 -- # set +x 00:11:25.366 ************************************ 00:11:25.366 START TEST cpu_locks 00:11:25.366 ************************************ 00:11:25.366 13:30:37 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:25.366 * Looking for test storage... 
00:11:25.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:25.366 13:30:37 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:25.366 13:30:37 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:11:25.366 13:30:37 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:25.366 13:30:37 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:25.366 13:30:37 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.366 13:30:37 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.366 13:30:37 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.366 13:30:37 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.366 13:30:37 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.366 13:30:37 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.366 13:30:37 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.366 13:30:37 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.366 13:30:37 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.366 13:30:37 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.366 13:30:37 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.366 13:30:37 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:25.366 13:30:37 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:25.366 13:30:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.366 13:30:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:25.367 13:30:37 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:25.367 13:30:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:25.367 13:30:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.367 13:30:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:25.367 13:30:37 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.367 13:30:37 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:25.367 13:30:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:25.367 13:30:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.367 13:30:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:25.367 13:30:37 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:25.367 13:30:37 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.367 13:30:37 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.367 13:30:37 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:25.367 13:30:37 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.367 13:30:37 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:25.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.367 --rc genhtml_branch_coverage=1 00:11:25.367 --rc genhtml_function_coverage=1 00:11:25.367 --rc genhtml_legend=1 00:11:25.367 --rc geninfo_all_blocks=1 00:11:25.367 --rc geninfo_unexecuted_blocks=1 00:11:25.367 00:11:25.367 ' 00:11:25.367 13:30:37 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:25.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.367 --rc genhtml_branch_coverage=1 00:11:25.367 --rc genhtml_function_coverage=1 
00:11:25.367 --rc genhtml_legend=1 00:11:25.367 --rc geninfo_all_blocks=1 00:11:25.367 --rc geninfo_unexecuted_blocks=1 00:11:25.367 00:11:25.367 ' 00:11:25.367 13:30:37 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:25.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.367 --rc genhtml_branch_coverage=1 00:11:25.367 --rc genhtml_function_coverage=1 00:11:25.367 --rc genhtml_legend=1 00:11:25.367 --rc geninfo_all_blocks=1 00:11:25.367 --rc geninfo_unexecuted_blocks=1 00:11:25.367 00:11:25.367 ' 00:11:25.367 13:30:37 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:25.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.367 --rc genhtml_branch_coverage=1 00:11:25.367 --rc genhtml_function_coverage=1 00:11:25.367 --rc genhtml_legend=1 00:11:25.367 --rc geninfo_all_blocks=1 00:11:25.367 --rc geninfo_unexecuted_blocks=1 00:11:25.367 00:11:25.367 ' 00:11:25.367 13:30:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:25.367 13:30:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:25.367 13:30:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:25.367 13:30:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:25.367 13:30:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:25.367 13:30:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.367 13:30:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:25.626 ************************************ 00:11:25.626 START TEST default_locks 00:11:25.626 ************************************ 00:11:25.626 13:30:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:11:25.626 13:30:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59911 00:11:25.626 13:30:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:25.626 13:30:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59911 00:11:25.626 13:30:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59911 ']' 00:11:25.626 13:30:37 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.626 13:30:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.626 13:30:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.626 13:30:37 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.626 13:30:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:25.626 [2024-11-20 13:30:37.445466] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
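The lt/cmp_versions trace above walks two dotted version strings field by field to decide whether the installed lcov predates 2.x. A minimal standalone sketch of that comparison (illustrative only; version_lt is a made-up name, and the real helper in scripts/common.sh splits on more separators and validates fields):

# version_lt: returns 0 when $1 sorts strictly before $2 (numeric dotted fields only)
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    ((${a[i]:-0} < ${b[i]:-0})) && return 0  # first differing field decides
    ((${a[i]:-0} > ${b[i]:-0})) && return 1
  done
  return 1  # equal versions are not "less than"
}
version_lt 1.15 2 && echo "pre-2.x lcov: pass --rc lcov_branch_coverage=1 etc."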
00:11:25.626 [2024-11-20 13:30:37.445593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59911 ] 00:11:25.886 [2024-11-20 13:30:37.632226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.886 [2024-11-20 13:30:37.757022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.823 13:30:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.823 13:30:38 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:11:26.823 13:30:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59911 00:11:26.823 13:30:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59911 00:11:26.823 13:30:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:27.391 13:30:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59911 00:11:27.391 13:30:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59911 ']' 00:11:27.391 13:30:39 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59911 00:11:27.391 13:30:39 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:11:27.391 13:30:39 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.391 13:30:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59911 00:11:27.391 13:30:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.391 13:30:39 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.391 killing process with pid 59911 00:11:27.391 13:30:39 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59911' 00:11:27.391 13:30:39 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59911 00:11:27.391 13:30:39 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59911 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59911 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59911 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59911 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59911 ']' 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.964 13:30:41 
event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:29.964 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59911) - No such process 00:11:29.964 ERROR: process (pid: 59911) is no longer running 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:29.964 00:11:29.964 real 0m4.367s 00:11:29.964 user 0m4.299s 00:11:29.964 sys 0m0.764s 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.964 ************************************ 00:11:29.964 13:30:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:29.964 END TEST default_locks 00:11:29.964 ************************************ 00:11:29.964 13:30:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:29.964 13:30:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.964 13:30:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.964 13:30:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:29.964 ************************************ 00:11:29.964 START TEST default_locks_via_rpc 00:11:29.964 ************************************ 00:11:29.964 13:30:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:11:29.964 13:30:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59991 00:11:29.964 13:30:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:29.964 13:30:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59991 00:11:29.964 13:30:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59991 ']' 00:11:29.964 13:30:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.965 13:30:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
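Both assertions the default_locks run above makes about lock files come down to a couple of lines of shell. A sketch using this run's pid (the helper bodies are paraphrased from the trace, not the canonical cpu_locks.sh text):

pid=59911  # the spdk_tgt from the run above
# while the target lives, lslocks must show a spdk_cpu_lock_* entry on its core
lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "FAIL: $pid holds no core lock"
# once it exits, no lock file may survive under /var/tmp
shopt -s nullglob
lock_files=(/var/tmp/spdk_cpu_lock_*)
((${#lock_files[@]} == 0)) || echo "FAIL: stale lock files: ${lock_files[*]}"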
00:11:29.965 13:30:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.965 13:30:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.965 13:30:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.965 [2024-11-20 13:30:41.879682] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:11:29.965 [2024-11-20 13:30:41.880265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59991 ] 00:11:30.224 [2024-11-20 13:30:42.046888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.483 [2024-11-20 13:30:42.212216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59991 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59991 00:11:31.421 13:30:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:31.680 13:30:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59991 00:11:31.680 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59991 ']' 00:11:31.680 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59991 00:11:31.680 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:11:31.680 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.680 13:30:43 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59991 00:11:31.680 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.680 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.680 killing process with pid 59991 00:11:31.680 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59991' 00:11:31.680 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59991 00:11:31.680 13:30:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59991 00:11:34.216 00:11:34.216 real 0m4.226s 00:11:34.216 user 0m4.156s 00:11:34.216 sys 0m0.690s 00:11:34.216 13:30:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.216 ************************************ 00:11:34.216 END TEST default_locks_via_rpc 00:11:34.216 ************************************ 00:11:34.216 13:30:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.216 13:30:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:34.216 13:30:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:34.216 13:30:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.216 13:30:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:34.216 ************************************ 00:11:34.216 START TEST non_locking_app_on_locked_coremask 00:11:34.216 ************************************ 00:11:34.216 13:30:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:11:34.216 13:30:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60065 00:11:34.216 13:30:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:34.216 13:30:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60065 /var/tmp/spdk.sock 00:11:34.216 13:30:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60065 ']' 00:11:34.216 13:30:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.216 13:30:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.216 13:30:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.216 13:30:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.216 13:30:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:34.474 [2024-11-20 13:30:46.179881] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
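The default_locks_via_rpc pass that just finished above toggles the same locks at runtime instead of at startup. Driving that flow by hand would look roughly like this, assuming a target already listening on the default socket (rpc.py path as shipped in the SPDK tree; pid taken from this run):

sock=/var/tmp/spdk.sock
tgt_pid=59991
scripts/rpc.py -s "$sock" framework_disable_cpumask_locks  # releases the lock files
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "unexpected: locks still held"
scripts/rpc.py -s "$sock" framework_enable_cpumask_locks   # re-claims them
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock || echo "unexpected: locks not re-taken"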
00:11:34.474 [2024-11-20 13:30:46.180021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60065 ] 00:11:34.474 [2024-11-20 13:30:46.362467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.733 [2024-11-20 13:30:46.487081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.671 13:30:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.671 13:30:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:35.671 13:30:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:35.671 13:30:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60087 00:11:35.671 13:30:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60087 /var/tmp/spdk2.sock 00:11:35.671 13:30:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60087 ']' 00:11:35.671 13:30:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:35.671 13:30:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:35.671 13:30:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:35.671 13:30:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.671 13:30:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:35.671 [2024-11-20 13:30:47.503142] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:11:35.671 [2024-11-20 13:30:47.503284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60087 ] 00:11:35.930 [2024-11-20 13:30:47.700877] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
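The reason this variant passes is visible in the two launches above: both targets sit on core mask 0x1, but the second opts out of lock claiming and takes its own RPC socket, so no claim collision can occur. The pair, as exercised here:

# first target claims core 0 and the default RPC socket /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
# second target shares core 0: lock claiming disabled, separate RPC socket
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &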
00:11:35.930 [2024-11-20 13:30:47.700962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.189 [2024-11-20 13:30:47.940877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.724 13:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.724 13:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:38.724 13:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60065 00:11:38.724 13:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60065 00:11:38.724 13:30:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:39.292 13:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60065 00:11:39.292 13:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60065 ']' 00:11:39.292 13:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60065 00:11:39.292 13:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:39.292 13:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.292 13:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60065 00:11:39.292 13:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.292 killing process with pid 60065 00:11:39.293 13:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.293 13:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60065' 00:11:39.293 13:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60065 00:11:39.293 13:30:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60065 00:11:44.593 13:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60087 00:11:44.593 13:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60087 ']' 00:11:44.593 13:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60087 00:11:44.593 13:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:44.593 13:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.593 13:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60087 00:11:44.593 13:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.593 13:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.593 killing process with pid 60087 00:11:44.593 13:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60087' 00:11:44.593 13:30:55 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60087 00:11:44.593 13:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60087 00:11:47.124 00:11:47.124 real 0m12.430s 00:11:47.124 user 0m12.760s 00:11:47.124 sys 0m1.520s 00:11:47.124 13:30:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.124 13:30:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:47.124 ************************************ 00:11:47.124 END TEST non_locking_app_on_locked_coremask 00:11:47.124 ************************************ 00:11:47.124 13:30:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:47.124 13:30:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:47.124 13:30:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.124 13:30:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:47.124 ************************************ 00:11:47.124 START TEST locking_app_on_unlocked_coremask 00:11:47.124 ************************************ 00:11:47.124 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:11:47.124 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60241 00:11:47.124 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60241 /var/tmp/spdk.sock 00:11:47.124 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60241 ']' 00:11:47.124 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.124 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.124 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:47.124 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.124 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.124 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:47.124 [2024-11-20 13:30:58.686351] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:11:47.124 [2024-11-20 13:30:58.686500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60241 ] 00:11:47.124 [2024-11-20 13:30:58.872377] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
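Every teardown above repeats the same killprocess steps: liveness probe, Linux-only comm lookup, sudo guard, SIGTERM, reap. Condensed into one sketch (killprocess_sketch is an illustrative name; the real helper in test/common/autotest_common.sh additionally handles sudo-wrapped targets):

killprocess_sketch() {
  local pid=$1 name
  kill -0 "$pid" || return 1                # still alive?
  [[ $(uname) == Linux ]] || return 1       # the ps syntax below is procps-specific
  name=$(ps --no-headers -o comm= "$pid")   # reports reactor_0 for an SPDK target
  [[ $name == sudo ]] && return 1           # the real helper resolves the child instead
  echo "killing process with pid $pid"
  kill "$pid" && wait "$pid"                # SIGTERM, then reap the child
}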
00:11:47.124 [2024-11-20 13:30:58.872463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.124 [2024-11-20 13:30:59.001244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.062 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.062 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:48.062 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60262 00:11:48.062 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:48.062 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60262 /var/tmp/spdk2.sock 00:11:48.062 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60262 ']' 00:11:48.062 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:48.062 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:48.062 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:48.062 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.062 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:48.319 [2024-11-20 13:31:00.091410] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:11:48.319 [2024-11-20 13:31:00.091559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60262 ] 00:11:48.577 [2024-11-20 13:31:00.283816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.835 [2024-11-20 13:31:00.535211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.367 13:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.367 13:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:51.367 13:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60262 00:11:51.367 13:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:51.367 13:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60262 00:11:51.625 13:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60241 00:11:51.625 13:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60241 ']' 00:11:51.625 13:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60241 00:11:51.625 13:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:51.625 13:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.625 13:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60241 00:11:51.885 13:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.885 13:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.885 killing process with pid 60241 00:11:51.885 13:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60241' 00:11:51.885 13:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60241 00:11:51.885 13:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60241 00:11:57.156 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60262 00:11:57.156 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60262 ']' 00:11:57.156 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60262 00:11:57.156 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:57.156 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.156 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60262 00:11:57.156 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.156 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.156 killing process with pid 60262 00:11:57.156 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60262' 00:11:57.156 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60262 00:11:57.157 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60262 00:11:59.685 00:11:59.685 real 0m12.617s 00:11:59.685 user 0m12.944s 00:11:59.685 sys 0m1.494s 00:11:59.685 13:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.685 13:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:59.685 ************************************ 00:11:59.685 END TEST locking_app_on_unlocked_coremask 00:11:59.685 ************************************ 00:11:59.685 13:31:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:59.685 13:31:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:59.685 13:31:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.685 13:31:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:59.685 ************************************ 00:11:59.685 START TEST locking_app_on_locked_coremask 00:11:59.685 ************************************ 00:11:59.685 13:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:11:59.685 13:31:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60416 00:11:59.685 13:31:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:59.686 13:31:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60416 /var/tmp/spdk.sock 00:11:59.686 13:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60416 ']' 00:11:59.686 13:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.686 13:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.686 13:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.686 13:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.686 13:31:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:59.686 [2024-11-20 13:31:11.363448] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
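Each launch above blocks on waitforlisten until the target's RPC socket is serving. A simplified stand-in for that wait (wait_for_rpc_socket is a made-up name and only watches for the socket path; the real helper also probes the RPC server, retrying up to the max_retries=100 seen in the traces):

wait_for_rpc_socket() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
  echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2> /dev/null || return 1  # target died during startup
    [[ -S $sock ]] && return 0               # socket exists: good enough here
    sleep 0.1
  done
  return 1  # timed out
}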
00:11:59.686 [2024-11-20 13:31:11.363572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60416 ] 00:11:59.686 [2024-11-20 13:31:11.535512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.946 [2024-11-20 13:31:11.652072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60437 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60437 /var/tmp/spdk2.sock 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60437 /var/tmp/spdk2.sock 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60437 /var/tmp/spdk2.sock 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60437 ']' 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.884 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:00.884 [2024-11-20 13:31:12.676045] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:12:00.884 [2024-11-20 13:31:12.676188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60437 ] 00:12:01.143 [2024-11-20 13:31:12.865632] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60416 has claimed it. 00:12:01.143 [2024-11-20 13:31:12.865716] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:01.447 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60437) - No such process 00:12:01.447 ERROR: process (pid: 60437) is no longer running 00:12:01.447 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.447 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:12:01.447 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:12:01.447 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:01.447 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:01.447 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:01.447 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60416 00:12:01.447 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60416 00:12:01.447 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:02.016 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60416 00:12:02.016 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60416 ']' 00:12:02.016 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60416 00:12:02.016 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:02.016 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.016 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60416 00:12:02.016 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.016 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.016 killing process with pid 60416 00:12:02.016 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60416' 00:12:02.016 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60416 00:12:02.016 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60416 00:12:04.550 00:12:04.550 real 0m5.048s 00:12:04.550 user 0m5.260s 00:12:04.550 sys 0m0.872s 00:12:04.550 13:31:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.550 13:31:16 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:12:04.550 ************************************ 00:12:04.550 END TEST locking_app_on_locked_coremask 00:12:04.550 ************************************ 00:12:04.550 13:31:16 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:04.550 13:31:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:04.550 13:31:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.550 13:31:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:04.550 ************************************ 00:12:04.550 START TEST locking_overlapped_coremask 00:12:04.550 ************************************ 00:12:04.550 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:12:04.550 13:31:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:12:04.550 13:31:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60507 00:12:04.550 13:31:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60507 /var/tmp/spdk.sock 00:12:04.550 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60507 ']' 00:12:04.550 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.550 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.550 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.550 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.550 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:04.810 [2024-11-20 13:31:16.510286] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
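The claim failure above is the expected outcome, and the NOT wrapper converts it into a passing assertion. Its inversion logic reduces to the following (NOT_sketch is illustrative; the `es > 128` branch, visible in the trace, keeps signal deaths counting as genuine failures):

NOT_sketch() {
  local es=0
  "$@" || es=$?
  ((es > 128)) && return "$es"  # crash/signal exits still fail the test
  ((es != 0))                   # succeed only if the command failed cleanly
}
# passes, because the second waitforlisten is supposed to give up:
NOT_sketch waitforlisten 60437 /var/tmp/spdk2.sock && echo "claim refused as expected"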
00:12:04.810 [2024-11-20 13:31:16.510502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60507 ] 00:12:04.810 [2024-11-20 13:31:16.699554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:05.069 [2024-11-20 13:31:16.827118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.069 [2024-11-20 13:31:16.827247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.069 [2024-11-20 13:31:16.827295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60529 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60529 /var/tmp/spdk2.sock 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60529 /var/tmp/spdk2.sock 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60529 /var/tmp/spdk2.sock 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60529 ']' 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:06.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.026 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:06.026 [2024-11-20 13:31:17.877136] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
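The two masks are chosen so that exactly one core is contested: 0x7 spans cores 0-2 and 0x1c spans cores 2-4, overlapping only on core 2, which is where the claim that follows fails. Verifying that with shell arithmetic:

printf 'overlap mask: 0x%x\n' $((0x7 & 0x1c))  # 0x4
for ((c = 0; c < 8; c++)); do
  (((0x7 & 0x1c) >> c & 1)) && echo "contested core: $c"  # prints: contested core: 2
done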
00:12:06.026 [2024-11-20 13:31:17.877292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60529 ] 00:12:06.285 [2024-11-20 13:31:18.072167] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60507 has claimed it. 00:12:06.285 [2024-11-20 13:31:18.075654] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:06.853 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60529) - No such process 00:12:06.853 ERROR: process (pid: 60529) is no longer running 00:12:06.853 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.853 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:12:06.853 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:12:06.853 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:06.853 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:06.853 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:06.853 13:31:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:06.853 13:31:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:06.853 13:31:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:06.853 13:31:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:06.853 13:31:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60507 00:12:06.853 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60507 ']' 00:12:06.853 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60507 00:12:06.853 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:12:06.853 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.854 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60507 00:12:06.854 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.854 killing process with pid 60507 00:12:06.854 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.854 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60507' 00:12:06.854 13:31:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60507 00:12:06.854 13:31:18 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60507 00:12:09.388 00:12:09.388 real 0m4.696s 00:12:09.388 user 0m12.659s 00:12:09.388 sys 0m0.688s 00:12:09.388 ************************************ 00:12:09.388 END TEST locking_overlapped_coremask 00:12:09.388 ************************************ 00:12:09.388 13:31:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.388 13:31:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:09.388 13:31:21 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:09.388 13:31:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:09.388 13:31:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.388 13:31:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:09.388 ************************************ 00:12:09.388 START TEST locking_overlapped_coremask_via_rpc 00:12:09.388 ************************************ 00:12:09.389 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:12:09.389 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:09.389 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60594 00:12:09.389 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60594 /var/tmp/spdk.sock 00:12:09.389 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60594 ']' 00:12:09.389 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.389 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.389 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.389 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.389 13:31:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.389 [2024-11-20 13:31:21.267655] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:12:09.389 [2024-11-20 13:31:21.268070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60594 ] 00:12:09.648 [2024-11-20 13:31:21.454186] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
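The check_remaining_locks step above asserts that the surviving -m 0x7 target still holds exactly one zero-padded lock file per claimed core. Its comparison, in sketch form (paths copied from the trace):

locks=(/var/tmp/spdk_cpu_lock_*)                    # what is actually on disk
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for mask 0x7
[[ ${locks[*]} == "${locks_expected[*]}" ]] || echo "FAIL: found ${locks[*]}"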
00:12:09.648 [2024-11-20 13:31:21.454467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:09.648 [2024-11-20 13:31:21.575330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.648 [2024-11-20 13:31:21.575426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.648 [2024-11-20 13:31:21.575455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.661 13:31:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.661 13:31:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:10.661 13:31:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60618 00:12:10.661 13:31:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60618 /var/tmp/spdk2.sock 00:12:10.661 13:31:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:10.661 13:31:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60618 ']' 00:12:10.661 13:31:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:10.661 13:31:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:10.661 13:31:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:10.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:10.661 13:31:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:10.661 13:31:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.661 [2024-11-20 13:31:22.589225] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:12:10.661 [2024-11-20 13:31:22.589616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60618 ] 00:12:10.919 [2024-11-20 13:31:22.775711] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:10.919 [2024-11-20 13:31:22.775957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:11.177 [2024-11-20 13:31:23.040966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.177 [2024-11-20 13:31:23.044677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:11.177 [2024-11-20 13:31:23.044677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.704 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.704 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:13.704 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:13.704 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.705 [2024-11-20 13:31:25.410876] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60594 has claimed it. 00:12:13.705 request: 00:12:13.705 { 00:12:13.705 "method": "framework_enable_cpumask_locks", 00:12:13.705 "req_id": 1 00:12:13.705 } 00:12:13.705 Got JSON-RPC error response 00:12:13.705 response: 00:12:13.705 { 00:12:13.705 "code": -32603, 00:12:13.705 "message": "Failed to claim CPU core: 2" 00:12:13.705 } 00:12:13.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
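Note: the -32603 "Failed to claim CPU core: 2" response above is the point of the test. Both targets were started with --disable-cpumask-locks, the first target (pid 60594) then claimed its cores via RPC, so the same RPC against the second target's socket must fail on the shared core 2. A minimal manual reproduction over the same sockets (a sketch only, paths as used by this run):

    # first target (cores 0-2) claims its locks: succeeds
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # second target (cores 2-4) tries the same: fails with -32603 on core 2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks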
00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60594 /var/tmp/spdk.sock 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60594 ']' 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.705 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.971 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.971 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:13.971 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60618 /var/tmp/spdk2.sock 00:12:13.971 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60618 ']' 00:12:13.971 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:13.971 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.971 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:13.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
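Note: each successfully claimed core is backed by a lock file under /var/tmp, which is what the check_remaining_locks helper on the following lines globs and compares against the expected set for the 0x7 mask. The same check by hand (a sketch under the same assumptions):

    # cores 0-2 locked => exactly three lock files should exist
    ls /var/tmp/spdk_cpu_lock_*    # expect: ..._000 ..._001 ..._002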
00:12:13.971 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.971 13:31:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.247 13:31:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.247 13:31:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:14.247 13:31:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:14.247 13:31:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:14.247 13:31:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:14.247 13:31:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:14.247 00:12:14.247 real 0m4.880s 00:12:14.247 user 0m1.700s 00:12:14.247 sys 0m0.218s 00:12:14.247 13:31:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.247 13:31:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.247 ************************************ 00:12:14.247 END TEST locking_overlapped_coremask_via_rpc 00:12:14.247 ************************************ 00:12:14.247 13:31:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:12:14.247 13:31:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60594 ]] 00:12:14.247 13:31:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60594 00:12:14.247 13:31:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60594 ']' 00:12:14.247 13:31:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60594 00:12:14.247 13:31:26 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:12:14.247 13:31:26 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.247 13:31:26 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60594 00:12:14.247 killing process with pid 60594 00:12:14.247 13:31:26 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.247 13:31:26 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.247 13:31:26 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60594' 00:12:14.247 13:31:26 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60594 00:12:14.247 13:31:26 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60594 00:12:16.784 13:31:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60618 ]] 00:12:16.784 13:31:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60618 00:12:16.784 13:31:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60618 ']' 00:12:16.784 13:31:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60618 00:12:16.784 13:31:28 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:12:16.784 13:31:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.784 
13:31:28 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60618 00:12:16.784 killing process with pid 60618 00:12:16.784 13:31:28 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:16.784 13:31:28 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:16.784 13:31:28 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60618' 00:12:16.784 13:31:28 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60618 00:12:16.784 13:31:28 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60618 00:12:19.364 13:31:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:19.364 13:31:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:19.364 13:31:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60594 ]] 00:12:19.364 13:31:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60594 00:12:19.364 13:31:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60594 ']' 00:12:19.364 13:31:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60594 00:12:19.364 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60594) - No such process 00:12:19.364 Process with pid 60594 is not found 00:12:19.364 13:31:31 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60594 is not found' 00:12:19.364 13:31:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60618 ]] 00:12:19.364 Process with pid 60618 is not found 00:12:19.364 13:31:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60618 00:12:19.364 13:31:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60618 ']' 00:12:19.364 13:31:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60618 00:12:19.364 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60618) - No such process 00:12:19.364 13:31:31 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60618 is not found' 00:12:19.364 13:31:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:19.364 00:12:19.364 real 0m53.992s 00:12:19.364 user 1m32.267s 00:12:19.364 sys 0m7.610s 00:12:19.364 13:31:31 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.364 13:31:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:19.364 ************************************ 00:12:19.364 END TEST cpu_locks 00:12:19.364 ************************************ 00:12:19.364 00:12:19.364 real 1m27.082s 00:12:19.364 user 2m38.296s 00:12:19.364 sys 0m12.435s 00:12:19.364 ************************************ 00:12:19.364 END TEST event 00:12:19.364 ************************************ 00:12:19.364 13:31:31 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.364 13:31:31 event -- common/autotest_common.sh@10 -- # set +x 00:12:19.364 13:31:31 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:19.364 13:31:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:19.364 13:31:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.365 13:31:31 -- common/autotest_common.sh@10 -- # set +x 00:12:19.365 ************************************ 00:12:19.365 START TEST thread 00:12:19.365 ************************************ 00:12:19.365 13:31:31 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:19.643 * Looking for test storage... 
00:12:19.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:19.643 13:31:31 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:19.643 13:31:31 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:12:19.643 13:31:31 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:19.643 13:31:31 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:19.643 13:31:31 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.643 13:31:31 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.643 13:31:31 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.643 13:31:31 thread -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.643 13:31:31 thread -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.643 13:31:31 thread -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.643 13:31:31 thread -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.643 13:31:31 thread -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.643 13:31:31 thread -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.643 13:31:31 thread -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.643 13:31:31 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.643 13:31:31 thread -- scripts/common.sh@344 -- # case "$op" in 00:12:19.643 13:31:31 thread -- scripts/common.sh@345 -- # : 1 00:12:19.643 13:31:31 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.643 13:31:31 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.643 13:31:31 thread -- scripts/common.sh@365 -- # decimal 1 00:12:19.643 13:31:31 thread -- scripts/common.sh@353 -- # local d=1 00:12:19.643 13:31:31 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.643 13:31:31 thread -- scripts/common.sh@355 -- # echo 1 00:12:19.643 13:31:31 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.643 13:31:31 thread -- scripts/common.sh@366 -- # decimal 2 00:12:19.643 13:31:31 thread -- scripts/common.sh@353 -- # local d=2 00:12:19.643 13:31:31 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.643 13:31:31 thread -- scripts/common.sh@355 -- # echo 2 00:12:19.643 13:31:31 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.643 13:31:31 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.643 13:31:31 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.643 13:31:31 thread -- scripts/common.sh@368 -- # return 0 00:12:19.643 13:31:31 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.643 13:31:31 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.643 --rc genhtml_branch_coverage=1 00:12:19.643 --rc genhtml_function_coverage=1 00:12:19.643 --rc genhtml_legend=1 00:12:19.643 --rc geninfo_all_blocks=1 00:12:19.643 --rc geninfo_unexecuted_blocks=1 00:12:19.643 00:12:19.643 ' 00:12:19.643 13:31:31 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.643 --rc genhtml_branch_coverage=1 00:12:19.643 --rc genhtml_function_coverage=1 00:12:19.643 --rc genhtml_legend=1 00:12:19.643 --rc geninfo_all_blocks=1 00:12:19.643 --rc geninfo_unexecuted_blocks=1 00:12:19.643 00:12:19.643 ' 00:12:19.643 13:31:31 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:12:19.643 --rc genhtml_branch_coverage=1 00:12:19.643 --rc genhtml_function_coverage=1 00:12:19.643 --rc genhtml_legend=1 00:12:19.643 --rc geninfo_all_blocks=1 00:12:19.643 --rc geninfo_unexecuted_blocks=1 00:12:19.643 00:12:19.643 ' 00:12:19.643 13:31:31 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.643 --rc genhtml_branch_coverage=1 00:12:19.643 --rc genhtml_function_coverage=1 00:12:19.643 --rc genhtml_legend=1 00:12:19.643 --rc geninfo_all_blocks=1 00:12:19.644 --rc geninfo_unexecuted_blocks=1 00:12:19.644 00:12:19.644 ' 00:12:19.644 13:31:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:19.644 13:31:31 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:19.644 13:31:31 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.644 13:31:31 thread -- common/autotest_common.sh@10 -- # set +x 00:12:19.644 ************************************ 00:12:19.644 START TEST thread_poller_perf 00:12:19.644 ************************************ 00:12:19.644 13:31:31 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:19.644 [2024-11-20 13:31:31.494918] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:12:19.644 [2024-11-20 13:31:31.495141] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60818 ] 00:12:19.902 [2024-11-20 13:31:31.678684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.902 Running 1000 pollers for 1 seconds with 1 microseconds period. 
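Note: reading the banner above against the command line, the poller_perf flags map as -b 1000 registering 1000 pollers, -l 1 setting a 1 microsecond poller period, and -t 1 running the measurement for 1 second; this mapping is inferred from the banners in this log rather than from the tool's help text. Invocation for reference (sketch):

    # -b: poller count, -l: period in microseconds, -t: run time in seconds
    /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1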
00:12:19.902 [2024-11-20 13:31:31.798477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.278 [2024-11-20T13:31:33.235Z] ====================================== 00:12:21.278 [2024-11-20T13:31:33.235Z] busy:2498198040 (cyc) 00:12:21.278 [2024-11-20T13:31:33.235Z] total_run_count: 387000 00:12:21.278 [2024-11-20T13:31:33.235Z] tsc_hz: 2490000000 (cyc) 00:12:21.278 [2024-11-20T13:31:33.235Z] ====================================== 00:12:21.278 [2024-11-20T13:31:33.235Z] poller_cost: 6455 (cyc), 2592 (nsec) 00:12:21.278 00:12:21.278 real 0m1.580s 00:12:21.278 user 0m1.373s 00:12:21.278 sys 0m0.098s 00:12:21.278 13:31:33 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.278 ************************************ 00:12:21.278 END TEST thread_poller_perf 00:12:21.278 ************************************ 00:12:21.278 13:31:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:21.278 13:31:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:21.278 13:31:33 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:21.278 13:31:33 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.278 13:31:33 thread -- common/autotest_common.sh@10 -- # set +x 00:12:21.278 ************************************ 00:12:21.278 START TEST thread_poller_perf 00:12:21.278 ************************************ 00:12:21.278 13:31:33 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:21.278 [2024-11-20 13:31:33.151525] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:12:21.279 [2024-11-20 13:31:33.151661] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60855 ] 00:12:21.537 [2024-11-20 13:31:33.335746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.537 Running 1000 pollers for 1 seconds with 0 microseconds period. 
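Note: the poller_cost line in the first run's table follows directly from the other counters: busy cycles divided by total_run_count gives cycles per poll, and dividing by tsc_hz (2.49 GHz here) converts cycles to nanoseconds. Reproducing the reported 6455 (cyc) / 2592 (nsec) with integer arithmetic (sketch):

    echo '2498198040 / 387000' | bc               # 6455 cycles per poll
    echo '6455 * 1000000000 / 2490000000' | bc    # 2592 nanoseconds at 2.49 GHz

The zero-period run announced just above checks out the same way once its table prints: 2494379606 / 5103000 = 488 cyc, about 195 nsec, since dropping the 1 microsecond period leaves only the framework's per-poll overhead.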
00:12:21.537 [2024-11-20 13:31:33.449242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.914 [2024-11-20T13:31:34.871Z] ====================================== 00:12:22.914 [2024-11-20T13:31:34.871Z] busy:2494379606 (cyc) 00:12:22.914 [2024-11-20T13:31:34.871Z] total_run_count: 5103000 00:12:22.914 [2024-11-20T13:31:34.871Z] tsc_hz: 2490000000 (cyc) 00:12:22.914 [2024-11-20T13:31:34.871Z] ====================================== 00:12:22.914 [2024-11-20T13:31:34.871Z] poller_cost: 488 (cyc), 195 (nsec) 00:12:22.914 00:12:22.914 real 0m1.572s 00:12:22.914 user 0m1.371s 00:12:22.914 sys 0m0.094s 00:12:22.914 ************************************ 00:12:22.914 END TEST thread_poller_perf 00:12:22.914 ************************************ 00:12:22.914 13:31:34 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.914 13:31:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:22.914 13:31:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:22.914 ************************************ 00:12:22.914 END TEST thread 00:12:22.914 ************************************ 00:12:22.914 00:12:22.914 real 0m3.530s 00:12:22.914 user 0m2.924s 00:12:22.914 sys 0m0.391s 00:12:22.914 13:31:34 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.914 13:31:34 thread -- common/autotest_common.sh@10 -- # set +x 00:12:22.914 13:31:34 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:12:22.914 13:31:34 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:22.914 13:31:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:22.914 13:31:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.914 13:31:34 -- common/autotest_common.sh@10 -- # set +x 00:12:22.914 ************************************ 00:12:22.914 START TEST app_cmdline 00:12:22.914 ************************************ 00:12:22.914 13:31:34 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:23.174 * Looking for test storage... 
00:12:23.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:23.174 13:31:34 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:23.174 13:31:34 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:12:23.174 13:31:34 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:23.174 13:31:35 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@345 -- # : 1 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.174 13:31:35 app_cmdline -- scripts/common.sh@368 -- # return 0 00:12:23.174 13:31:35 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.174 13:31:35 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:23.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.174 --rc genhtml_branch_coverage=1 00:12:23.174 --rc genhtml_function_coverage=1 00:12:23.174 --rc genhtml_legend=1 00:12:23.174 --rc geninfo_all_blocks=1 00:12:23.174 --rc geninfo_unexecuted_blocks=1 00:12:23.174 00:12:23.174 ' 00:12:23.174 13:31:35 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:23.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.174 --rc genhtml_branch_coverage=1 00:12:23.174 --rc genhtml_function_coverage=1 00:12:23.174 --rc genhtml_legend=1 00:12:23.174 --rc geninfo_all_blocks=1 00:12:23.174 --rc geninfo_unexecuted_blocks=1 00:12:23.174 
00:12:23.174 ' 00:12:23.174 13:31:35 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:23.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.174 --rc genhtml_branch_coverage=1 00:12:23.174 --rc genhtml_function_coverage=1 00:12:23.174 --rc genhtml_legend=1 00:12:23.174 --rc geninfo_all_blocks=1 00:12:23.174 --rc geninfo_unexecuted_blocks=1 00:12:23.174 00:12:23.174 ' 00:12:23.174 13:31:35 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:23.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.174 --rc genhtml_branch_coverage=1 00:12:23.174 --rc genhtml_function_coverage=1 00:12:23.174 --rc genhtml_legend=1 00:12:23.174 --rc geninfo_all_blocks=1 00:12:23.174 --rc geninfo_unexecuted_blocks=1 00:12:23.174 00:12:23.174 ' 00:12:23.174 13:31:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:23.174 13:31:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60944 00:12:23.174 13:31:35 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:23.174 13:31:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60944 00:12:23.174 13:31:35 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60944 ']' 00:12:23.174 13:31:35 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.174 13:31:35 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.174 13:31:35 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.174 13:31:35 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.174 13:31:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:23.434 [2024-11-20 13:31:35.137551] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:12:23.434 [2024-11-20 13:31:35.138695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60944 ] 00:12:23.434 [2024-11-20 13:31:35.339178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.694 [2024-11-20 13:31:35.455815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.631 13:31:36 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.631 13:31:36 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:12:24.631 13:31:36 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:24.631 { 00:12:24.631 "version": "SPDK v25.01-pre git sha1 d2ebd983e", 00:12:24.631 "fields": { 00:12:24.631 "major": 25, 00:12:24.631 "minor": 1, 00:12:24.631 "patch": 0, 00:12:24.631 "suffix": "-pre", 00:12:24.631 "commit": "d2ebd983e" 00:12:24.631 } 00:12:24.631 } 00:12:24.631 13:31:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:24.631 13:31:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:24.631 13:31:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:24.631 13:31:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:24.631 13:31:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:24.631 13:31:36 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.631 13:31:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:24.631 13:31:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:24.631 13:31:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:24.631 13:31:36 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.631 13:31:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:24.631 13:31:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:24.631 13:31:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:24.631 13:31:36 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:12:24.631 13:31:36 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:24.631 13:31:36 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:24.631 13:31:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:24.631 13:31:36 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:24.631 13:31:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:24.631 13:31:36 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:24.631 13:31:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:24.631 13:31:36 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:24.631 13:31:36 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:24.632 13:31:36 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:24.891 request: 00:12:24.891 { 00:12:24.891 "method": "env_dpdk_get_mem_stats", 00:12:24.891 "req_id": 1 00:12:24.891 } 00:12:24.891 Got JSON-RPC error response 00:12:24.891 response: 00:12:24.891 { 00:12:24.891 "code": -32601, 00:12:24.891 "message": "Method not found" 00:12:24.891 } 00:12:24.891 13:31:36 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:12:24.891 13:31:36 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:24.891 13:31:36 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:24.891 13:31:36 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:24.891 13:31:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60944 00:12:24.891 13:31:36 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60944 ']' 00:12:24.891 13:31:36 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60944 00:12:24.891 13:31:36 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:12:24.891 13:31:36 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.891 13:31:36 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60944 00:12:24.891 killing process with pid 60944 00:12:24.891 13:31:36 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.891 13:31:36 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.891 13:31:36 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60944' 00:12:24.891 13:31:36 app_cmdline -- common/autotest_common.sh@973 -- # kill 60944 00:12:24.891 13:31:36 app_cmdline -- common/autotest_common.sh@978 -- # wait 60944 00:12:27.422 00:12:27.422 real 0m4.567s 00:12:27.422 user 0m4.698s 00:12:27.422 sys 0m0.682s 00:12:27.422 ************************************ 00:12:27.422 END TEST app_cmdline 00:12:27.422 ************************************ 00:12:27.422 13:31:39 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.422 13:31:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:27.680 13:31:39 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:27.680 13:31:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:27.680 13:31:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.680 13:31:39 -- common/autotest_common.sh@10 -- # set +x 00:12:27.680 ************************************ 00:12:27.680 START TEST version 00:12:27.680 ************************************ 00:12:27.680 13:31:39 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:27.680 * Looking for test storage... 
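Note: the -32601 "Method not found" exchange just above is the expected behavior under test: this spdk_tgt instance was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any RPC outside that allowlist is rejected. Exercising both sides of the allowlist by hand (a sketch, default socket as in this run):

    # on the allowlist: returns the version JSON shown earlier
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
    # not on the allowlist: rejected with -32601 'Method not found'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats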
00:12:27.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:27.680 13:31:39 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:27.680 13:31:39 version -- common/autotest_common.sh@1693 -- # lcov --version 00:12:27.680 13:31:39 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:27.939 13:31:39 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:27.939 13:31:39 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.939 13:31:39 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.939 13:31:39 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.939 13:31:39 version -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.939 13:31:39 version -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.939 13:31:39 version -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.939 13:31:39 version -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.939 13:31:39 version -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.939 13:31:39 version -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.939 13:31:39 version -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.939 13:31:39 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.939 13:31:39 version -- scripts/common.sh@344 -- # case "$op" in 00:12:27.939 13:31:39 version -- scripts/common.sh@345 -- # : 1 00:12:27.939 13:31:39 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.939 13:31:39 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:27.939 13:31:39 version -- scripts/common.sh@365 -- # decimal 1 00:12:27.939 13:31:39 version -- scripts/common.sh@353 -- # local d=1 00:12:27.939 13:31:39 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.939 13:31:39 version -- scripts/common.sh@355 -- # echo 1 00:12:27.939 13:31:39 version -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.939 13:31:39 version -- scripts/common.sh@366 -- # decimal 2 00:12:27.939 13:31:39 version -- scripts/common.sh@353 -- # local d=2 00:12:27.939 13:31:39 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.939 13:31:39 version -- scripts/common.sh@355 -- # echo 2 00:12:27.939 13:31:39 version -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.939 13:31:39 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.939 13:31:39 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.939 13:31:39 version -- scripts/common.sh@368 -- # return 0 00:12:27.939 13:31:39 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.939 13:31:39 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:27.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.939 --rc genhtml_branch_coverage=1 00:12:27.939 --rc genhtml_function_coverage=1 00:12:27.939 --rc genhtml_legend=1 00:12:27.939 --rc geninfo_all_blocks=1 00:12:27.939 --rc geninfo_unexecuted_blocks=1 00:12:27.939 00:12:27.939 ' 00:12:27.939 13:31:39 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:27.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.939 --rc genhtml_branch_coverage=1 00:12:27.939 --rc genhtml_function_coverage=1 00:12:27.939 --rc genhtml_legend=1 00:12:27.939 --rc geninfo_all_blocks=1 00:12:27.939 --rc geninfo_unexecuted_blocks=1 00:12:27.939 00:12:27.939 ' 00:12:27.939 13:31:39 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:27.939 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:27.939 --rc genhtml_branch_coverage=1 00:12:27.939 --rc genhtml_function_coverage=1 00:12:27.939 --rc genhtml_legend=1 00:12:27.939 --rc geninfo_all_blocks=1 00:12:27.939 --rc geninfo_unexecuted_blocks=1 00:12:27.939 00:12:27.939 ' 00:12:27.939 13:31:39 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:27.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.939 --rc genhtml_branch_coverage=1 00:12:27.939 --rc genhtml_function_coverage=1 00:12:27.939 --rc genhtml_legend=1 00:12:27.939 --rc geninfo_all_blocks=1 00:12:27.939 --rc geninfo_unexecuted_blocks=1 00:12:27.939 00:12:27.939 ' 00:12:27.939 13:31:39 version -- app/version.sh@17 -- # get_header_version major 00:12:27.939 13:31:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:27.939 13:31:39 version -- app/version.sh@14 -- # cut -f2 00:12:27.939 13:31:39 version -- app/version.sh@14 -- # tr -d '"' 00:12:27.939 13:31:39 version -- app/version.sh@17 -- # major=25 00:12:27.939 13:31:39 version -- app/version.sh@18 -- # get_header_version minor 00:12:27.939 13:31:39 version -- app/version.sh@14 -- # cut -f2 00:12:27.939 13:31:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:27.939 13:31:39 version -- app/version.sh@14 -- # tr -d '"' 00:12:27.939 13:31:39 version -- app/version.sh@18 -- # minor=1 00:12:27.939 13:31:39 version -- app/version.sh@19 -- # get_header_version patch 00:12:27.939 13:31:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:27.939 13:31:39 version -- app/version.sh@14 -- # tr -d '"' 00:12:27.939 13:31:39 version -- app/version.sh@14 -- # cut -f2 00:12:27.939 13:31:39 version -- app/version.sh@19 -- # patch=0 00:12:27.939 13:31:39 version -- app/version.sh@20 -- # get_header_version suffix 00:12:27.939 13:31:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:27.939 13:31:39 version -- app/version.sh@14 -- # cut -f2 00:12:27.939 13:31:39 version -- app/version.sh@14 -- # tr -d '"' 00:12:27.939 13:31:39 version -- app/version.sh@20 -- # suffix=-pre 00:12:27.939 13:31:39 version -- app/version.sh@22 -- # version=25.1 00:12:27.939 13:31:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:27.939 13:31:39 version -- app/version.sh@28 -- # version=25.1rc0 00:12:27.939 13:31:39 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:27.939 13:31:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:27.939 13:31:39 version -- app/version.sh@30 -- # py_version=25.1rc0 00:12:27.940 13:31:39 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:12:27.940 00:12:27.940 real 0m0.329s 00:12:27.940 user 0m0.206s 00:12:27.940 sys 0m0.180s 00:12:27.940 13:31:39 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.940 13:31:39 version -- common/autotest_common.sh@10 -- # set +x 00:12:27.940 ************************************ 00:12:27.940 END TEST version 00:12:27.940 ************************************ 00:12:27.940 13:31:39 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:12:27.940 13:31:39 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:12:27.940 13:31:39 -- spdk/autotest.sh@194 -- # uname -s 00:12:27.940 13:31:39 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:12:27.940 13:31:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:27.940 13:31:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:27.940 13:31:39 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:12:27.940 13:31:39 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:12:27.940 13:31:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:27.940 13:31:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.940 13:31:39 -- common/autotest_common.sh@10 -- # set +x 00:12:27.940 ************************************ 00:12:27.940 START TEST blockdev_nvme 00:12:27.940 ************************************ 00:12:27.940 13:31:39 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:12:28.199 * Looking for test storage... 00:12:28.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:28.199 13:31:39 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:28.199 13:31:39 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:28.199 13:31:39 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:28.199 13:31:40 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.199 13:31:40 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:12:28.199 13:31:40 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.199 13:31:40 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:28.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.199 --rc genhtml_branch_coverage=1 00:12:28.199 --rc genhtml_function_coverage=1 00:12:28.199 --rc genhtml_legend=1 00:12:28.199 --rc geninfo_all_blocks=1 00:12:28.199 --rc geninfo_unexecuted_blocks=1 00:12:28.199 00:12:28.199 ' 00:12:28.199 13:31:40 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:28.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.199 --rc genhtml_branch_coverage=1 00:12:28.199 --rc genhtml_function_coverage=1 00:12:28.199 --rc genhtml_legend=1 00:12:28.199 --rc geninfo_all_blocks=1 00:12:28.199 --rc geninfo_unexecuted_blocks=1 00:12:28.199 00:12:28.199 ' 00:12:28.199 13:31:40 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:28.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.199 --rc genhtml_branch_coverage=1 00:12:28.199 --rc genhtml_function_coverage=1 00:12:28.199 --rc genhtml_legend=1 00:12:28.199 --rc geninfo_all_blocks=1 00:12:28.200 --rc geninfo_unexecuted_blocks=1 00:12:28.200 00:12:28.200 ' 00:12:28.200 13:31:40 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:28.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.200 --rc genhtml_branch_coverage=1 00:12:28.200 --rc genhtml_function_coverage=1 00:12:28.200 --rc genhtml_legend=1 00:12:28.200 --rc geninfo_all_blocks=1 00:12:28.200 --rc geninfo_unexecuted_blocks=1 00:12:28.200 00:12:28.200 ' 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:28.200 13:31:40 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61137 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61137 00:12:28.200 13:31:40 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:28.200 13:31:40 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61137 ']' 00:12:28.200 13:31:40 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.200 13:31:40 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.200 13:31:40 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.200 13:31:40 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.200 13:31:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.458 [2024-11-20 13:31:40.231144] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:12:28.458 [2024-11-20 13:31:40.231480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61137 ] 00:12:28.459 [2024-11-20 13:31:40.402540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.718 [2024-11-20 13:31:40.534826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.654 13:31:41 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.655 13:31:41 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:12:29.655 13:31:41 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:12:29.655 13:31:41 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:12:29.655 13:31:41 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:12:29.655 13:31:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:12:29.655 13:31:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:29.655 13:31:41 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:12:29.655 13:31:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.655 13:31:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:29.913 13:31:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.913 13:31:41 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:12:29.913 13:31:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.913 13:31:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:30.174 13:31:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.174 13:31:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:12:30.174 13:31:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:12:30.174 13:31:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.174 13:31:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:30.174 13:31:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.174 13:31:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:12:30.174 13:31:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.174 13:31:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:30.174 13:31:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.174 13:31:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:30.174 13:31:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.175 13:31:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:30.175 13:31:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.175 13:31:41 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 13:31:41 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 13:31:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 13:31:41 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 13:31:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:30.175 13:31:42 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 13:31:42 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 13:31:42 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "cb4c841e-4992-4032-b60b-4819029169c7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cb4c841e-4992-4032-b60b-4819029169c7",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "48d8a371-171b-4f04-9bee-ecbdb00a9827"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "48d8a371-171b-4f04-9bee-ecbdb00a9827",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' '
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2ed0113d-9e4b-415f-a552-781e69e3626c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2ed0113d-9e4b-415f-a552-781e69e3626c",'
13:31:42 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:12:30.175
' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "bf606e01-1c0c-4c8f-93e8-422d1a56f96c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bf606e01-1c0c-4c8f-93e8-422d1a56f96c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f745edce-6dfc-4094-9887-c10fbe193f98"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid":
"f745edce-6dfc-4094-9887-c10fbe193f98",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d8313f61-6a2b-42e3-ac6a-3e0da11703cd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d8313f61-6a2b-42e3-ac6a-3e0da11703cd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:12:30.440 13:31:42 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:12:30.440 13:31:42 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:12:30.440 13:31:42 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:12:30.440 13:31:42 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61137 00:12:30.440 13:31:42 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61137 ']' 00:12:30.440 13:31:42 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61137 00:12:30.440 13:31:42 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:12:30.440 13:31:42 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.440 13:31:42 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61137 00:12:30.440 killing process with pid 61137 00:12:30.440 13:31:42 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.440 13:31:42 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.440 13:31:42 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61137' 00:12:30.440 13:31:42 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61137 00:12:30.440 13:31:42 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61137 00:12:32.975 13:31:44 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:32.975 13:31:44 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:32.975 13:31:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:32.975 13:31:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.975 13:31:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:32.975 ************************************ 00:12:32.975 START TEST bdev_hello_world 00:12:32.975 ************************************ 00:12:32.975 13:31:44 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:32.975 [2024-11-20 13:31:44.876246] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:12:32.975 [2024-11-20 13:31:44.876543] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61233 ] 00:12:33.234 [2024-11-20 13:31:45.065547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.493 [2024-11-20 13:31:45.197772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.061 [2024-11-20 13:31:45.884610] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:34.061 [2024-11-20 13:31:45.884659] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:12:34.061 [2024-11-20 13:31:45.884684] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:34.061 [2024-11-20 13:31:45.888222] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:34.061 [2024-11-20 13:31:45.888764] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:34.061 [2024-11-20 13:31:45.888915] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:34.061 [2024-11-20 13:31:45.889146] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
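For orientation at this point in the log: the bdev_hello_world step above reduces to a single invocation of the hello_bdev example. A minimal sketch of reproducing it by hand, assuming the workspace layout visible in this trace (the trailing empty argument on the traced command line is simply forwarded by the run_test wrapper):

  # Sketch only: paths are the ones from this run, not guaranteed elsewhere.
  SPDK=/home/vagrant/spdk_repo/spdk
  sudo "$SPDK/build/examples/hello_bdev" \
      --json "$SPDK/test/bdev/bdev.json" \
      -b Nvme0n1
  # On success the example writes a buffer to Nvme0n1, reads it back, and
  # logs "Read string from bdev : Hello World!" before stopping the app;
  # run_test treats the zero exit status as the pass condition.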
00:12:34.061 00:12:34.061 [2024-11-20 13:31:45.889354] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:35.436 00:12:35.436 real 0m2.325s 00:12:35.436 ************************************ 00:12:35.436 END TEST bdev_hello_world 00:12:35.436 ************************************ 00:12:35.436 user 0m1.962s 00:12:35.436 sys 0m0.252s 00:12:35.436 13:31:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.436 13:31:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:35.436 13:31:47 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:12:35.436 13:31:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:35.436 13:31:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.436 13:31:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:35.436 ************************************ 00:12:35.436 START TEST bdev_bounds 00:12:35.436 ************************************ 00:12:35.436 13:31:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:35.436 13:31:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61281 00:12:35.436 13:31:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:35.436 13:31:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:35.436 13:31:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61281' 00:12:35.436 Process bdevio pid: 61281 00:12:35.436 13:31:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61281 00:12:35.436 13:31:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61281 ']' 00:12:35.436 13:31:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.436 13:31:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.436 13:31:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.436 13:31:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.436 13:31:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:35.436 [2024-11-20 13:31:47.269512] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:12:35.436 [2024-11-20 13:31:47.269678] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61281 ] 00:12:35.696 [2024-11-20 13:31:47.455736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:35.696 [2024-11-20 13:31:47.587372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.696 [2024-11-20 13:31:47.588123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.696 [2024-11-20 13:31:47.588123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.634 13:31:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.634 13:31:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:12:36.634 13:31:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:36.634 I/O targets: 00:12:36.634 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:36.634 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:36.634 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:36.634 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:36.634 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:36.634 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:36.634 00:12:36.634 00:12:36.634 CUnit - A unit testing framework for C - Version 2.1-3 00:12:36.634 http://cunit.sourceforge.net/ 00:12:36.634 00:12:36.634 00:12:36.634 Suite: bdevio tests on: Nvme3n1 00:12:36.634 Test: blockdev write read block ...passed 00:12:36.634 Test: blockdev write zeroes read block ...passed 00:12:36.634 Test: blockdev write zeroes read no split ...passed 00:12:36.634 Test: blockdev write zeroes read split ...passed 00:12:36.634 Test: blockdev write zeroes read split partial ...passed 00:12:36.634 Test: blockdev reset ...[2024-11-20 13:31:48.515595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:12:36.634 [2024-11-20 13:31:48.519649] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. passed 00:12:36.634 Test: blockdev write read 8 blocks ...
00:12:36.634 passed 00:12:36.634 Test: blockdev write read size > 128k ...passed 00:12:36.634 Test: blockdev write read invalid size ...passed 00:12:36.634 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.634 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.634 Test: blockdev write read max offset ...passed 00:12:36.634 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.634 Test: blockdev writev readv 8 blocks ...passed 00:12:36.634 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.634 Test: blockdev writev readv block ...passed 00:12:36.634 Test: blockdev writev readv size > 128k ...passed 00:12:36.634 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.634 Test: blockdev comparev and writev ...[2024-11-20 13:31:48.529126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b420a000 len:0x1000 00:12:36.634 [2024-11-20 13:31:48.529180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:36.634 passed 00:12:36.634 Test: blockdev nvme passthru rw ...passed 00:12:36.634 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:31:48.529961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:36.634 passed 00:12:36.634 Test: blockdev nvme admin passthru ...[2024-11-20 13:31:48.529997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:36.634 passed 00:12:36.634 Test: blockdev copy ...passed 00:12:36.634 Suite: bdevio tests on: Nvme2n3 00:12:36.634 Test: blockdev write read block ...passed 00:12:36.634 Test: blockdev write zeroes read block ...passed 00:12:36.634 Test: blockdev write zeroes read no split ...passed 00:12:36.634 Test: blockdev write zeroes read split ...passed 00:12:36.894 Test: blockdev write zeroes read split partial ...passed 00:12:36.894 Test: blockdev reset ...[2024-11-20 13:31:48.610785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:36.894 [2024-11-20 13:31:48.615391] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:12:36.894 Test: blockdev write read 8 blocks ...
00:12:36.894 passed 00:12:36.894 Test: blockdev write read size > 128k ...passed 00:12:36.894 Test: blockdev write read invalid size ...passed 00:12:36.894 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.894 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.894 Test: blockdev write read max offset ...passed 00:12:36.894 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.894 Test: blockdev writev readv 8 blocks ...passed 00:12:36.894 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.894 Test: blockdev writev readv block ...passed 00:12:36.894 Test: blockdev writev readv size > 128k ...passed 00:12:36.894 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.894 Test: blockdev comparev and writev ...[2024-11-20 13:31:48.625711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x296c06000 len:0x1000 00:12:36.894 [2024-11-20 13:31:48.625763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:36.894 passed 00:12:36.894 Test: blockdev nvme passthru rw ...passed 00:12:36.894 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.894 Test: blockdev nvme admin passthru ...[2024-11-20 13:31:48.626621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:36.894 [2024-11-20 13:31:48.626662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:36.894 passed 00:12:36.894 Test: blockdev copy ...passed 00:12:36.894 Suite: bdevio tests on: Nvme2n2 00:12:36.894 Test: blockdev write read block ...passed 00:12:36.894 Test: blockdev write zeroes read block ...passed 00:12:36.894 Test: blockdev write zeroes read no split ...passed 00:12:36.894 Test: blockdev write zeroes read split ...passed 00:12:36.894 Test: blockdev write zeroes read split partial ...passed 00:12:36.894 Test: blockdev reset ...[2024-11-20 13:31:48.705844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:36.894 passed 00:12:36.894 Test: blockdev write read 8 blocks ...[2024-11-20 13:31:48.710077] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:36.894 passed 00:12:36.894 Test: blockdev write read size > 128k ...passed 00:12:36.894 Test: blockdev write read invalid size ...passed 00:12:36.894 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.894 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.894 Test: blockdev write read max offset ...passed 00:12:36.894 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.894 Test: blockdev writev readv 8 blocks ...passed 00:12:36.894 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.894 Test: blockdev writev readv block ...passed 00:12:36.895 Test: blockdev writev readv size > 128k ...passed 00:12:36.895 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.895 Test: blockdev comparev and writev ...[2024-11-20 13:31:48.718451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c423c000 len:0x1000 00:12:36.895 [2024-11-20 13:31:48.718644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:36.895 passed 00:12:36.895 Test: blockdev nvme passthru rw ...passed 00:12:36.895 Test: blockdev nvme passthru vendor specific ...passed 00:12:36.895 Test: blockdev nvme admin passthru ...[2024-11-20 13:31:48.719474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:36.895 [2024-11-20 13:31:48.719517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:36.895 passed 00:12:36.895 Test: blockdev copy ...passed 00:12:36.895 Suite: bdevio tests on: Nvme2n1 00:12:36.895 Test: blockdev write read block ...passed 00:12:36.895 Test: blockdev write zeroes read block ...passed 00:12:36.895 Test: blockdev write zeroes read no split ...passed 00:12:36.895 Test: blockdev write zeroes read split ...passed 00:12:36.895 Test: blockdev write zeroes read split partial ...passed 00:12:36.895 Test: blockdev reset ...[2024-11-20 13:31:48.798073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:36.895 [2024-11-20 13:31:48.802249] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:12:36.895 Test: blockdev write read 8 blocks ...
00:12:36.895 passed 00:12:36.895 Test: blockdev write read size > 128k ...passed 00:12:36.895 Test: blockdev write read invalid size ...passed 00:12:36.895 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.895 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.895 Test: blockdev write read max offset ...passed 00:12:36.895 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.895 Test: blockdev writev readv 8 blocks ...passed 00:12:36.895 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.895 Test: blockdev writev readv block ...passed 00:12:36.895 Test: blockdev writev readv size > 128k ...passed 00:12:36.895 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.895 Test: blockdev comparev and writev ...[2024-11-20 13:31:48.811735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4238000 len:0x1000 00:12:36.895 [2024-11-20 13:31:48.811943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:36.895 passed 00:12:36.895 Test: blockdev nvme passthru rw ...passed 00:12:36.895 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:31:48.813206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:36.895 [2024-11-20 13:31:48.813375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:36.895 passed 00:12:36.895 Test: blockdev nvme admin passthru ...passed 00:12:36.895 Test: blockdev copy ...passed 00:12:36.895 Suite: bdevio tests on: Nvme1n1 00:12:36.895 Test: blockdev write read block ...passed 00:12:36.895 Test: blockdev write zeroes read block ...passed 00:12:36.895 Test: blockdev write zeroes read no split ...passed 00:12:37.154 Test: blockdev write zeroes read split ...passed 00:12:37.154 Test: blockdev write zeroes read split partial ...passed 00:12:37.154 Test: blockdev reset ...[2024-11-20 13:31:48.888884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:12:37.154 passed 00:12:37.154 Test: blockdev write read 8 blocks ...[2024-11-20 13:31:48.892850] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:12:37.154 passed 00:12:37.154 Test: blockdev write read size > 128k ...passed 00:12:37.154 Test: blockdev write read invalid size ...passed 00:12:37.154 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:37.154 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:37.154 Test: blockdev write read max offset ...passed 00:12:37.154 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:37.154 Test: blockdev writev readv 8 blocks ...passed 00:12:37.154 Test: blockdev writev readv 30 x 1block ...passed 00:12:37.154 Test: blockdev writev readv block ...passed 00:12:37.154 Test: blockdev writev readv size > 128k ...passed 00:12:37.154 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:37.154 Test: blockdev comparev and writev ...[2024-11-20 13:31:48.901100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4234000 len:0x1000 00:12:37.154 [2024-11-20 13:31:48.901154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:37.154 passed 00:12:37.154 Test: blockdev nvme passthru rw ...passed 00:12:37.154 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:31:48.901897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:37.154 [2024-11-20 13:31:48.902039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:37.154 passed 00:12:37.154 Test: blockdev nvme admin passthru ...passed 00:12:37.154 Test: blockdev copy ...passed 00:12:37.154 Suite: bdevio tests on: Nvme0n1 00:12:37.154 Test: blockdev write read block ...passed 00:12:37.154 Test: blockdev write zeroes read block ...passed 00:12:37.154 Test: blockdev write zeroes read no split ...passed 00:12:37.154 Test: blockdev write zeroes read split ...passed 00:12:37.154 Test: blockdev write zeroes read split partial ...passed 00:12:37.154 Test: blockdev reset ...[2024-11-20 13:31:48.983343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:37.154 passed 00:12:37.154 Test: blockdev write read 8 blocks ...[2024-11-20 13:31:48.987336] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:12:37.154 passed 00:12:37.154 Test: blockdev write read size > 128k ...passed 00:12:37.154 Test: blockdev write read invalid size ...passed 00:12:37.154 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:37.154 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:37.154 Test: blockdev write read max offset ...passed 00:12:37.154 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:37.154 Test: blockdev writev readv 8 blocks ...passed 00:12:37.154 Test: blockdev writev readv 30 x 1block ...passed 00:12:37.154 Test: blockdev writev readv block ...passed 00:12:37.154 Test: blockdev writev readv size > 128k ...passed 00:12:37.154 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:37.154 Test: blockdev comparev and writev ...passed 00:12:37.154 Test: blockdev nvme passthru rw ...[2024-11-20 13:31:48.994815] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:12:37.154 separate metadata which is not supported yet. 
00:12:37.154 passed 00:12:37.154 Test: blockdev nvme passthru vendor specific ...passed 00:12:37.154 Test: blockdev nvme admin passthru ...[2024-11-20 13:31:48.995370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:12:37.154 [2024-11-20 13:31:48.995421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:12:37.154 passed 00:12:37.154 Test: blockdev copy ...passed 00:12:37.154 00:12:37.154 Run Summary: Type Total Ran Passed Failed Inactive 00:12:37.154 suites 6 6 n/a 0 0 00:12:37.154 tests 138 138 138 0 0 00:12:37.154 asserts 893 893 893 0 n/a 00:12:37.154 00:12:37.154 Elapsed time = 1.609 seconds 00:12:37.154 0 00:12:37.154 13:31:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61281 00:12:37.154 13:31:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61281 ']' 00:12:37.154 13:31:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61281 00:12:37.154 13:31:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:12:37.154 13:31:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.154 13:31:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61281 00:12:37.154 13:31:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.154 13:31:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.154 13:31:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61281' 00:12:37.154 killing process with pid 61281 00:12:37.154 13:31:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61281 00:12:37.154 13:31:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61281 00:12:38.546 13:31:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:38.546 00:12:38.546 real 0m3.003s 00:12:38.546 user 0m7.618s 00:12:38.546 sys 0m0.424s 00:12:38.546 ************************************ 00:12:38.546 END TEST bdev_bounds 00:12:38.546 ************************************ 00:12:38.546 13:31:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.546 13:31:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:38.546 13:31:50 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:38.546 13:31:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:38.546 13:31:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.546 13:31:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:38.546 ************************************ 00:12:38.546 START TEST bdev_nbd 00:12:38.546 ************************************ 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61346 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61346 /var/tmp/spdk-nbd.sock 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61346 ']' 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:38.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.546 13:31:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:38.546 [2024-11-20 13:31:50.357189] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:12:38.546 [2024-11-20 13:31:50.357636] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.804 [2024-11-20 13:31:50.538677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.804 [2024-11-20 13:31:50.657534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.740 1+0 records in 
00:12:39.740 1+0 records out 00:12:39.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059112 s, 6.9 MB/s 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:39.740 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.999 1+0 records in 00:12:39.999 1+0 records out 00:12:39.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622867 s, 6.6 MB/s 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:39.999 13:31:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:12:40.263 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:40.263 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:40.263 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:12:40.263 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:12:40.263 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:40.263 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:40.263 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:40.263 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:12:40.263 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:40.263 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:40.263 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:40.263 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.263 1+0 records in 00:12:40.263 1+0 records out 00:12:40.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743387 s, 5.5 MB/s 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.543 1+0 records in 00:12:40.543 1+0 records out 00:12:40.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620394 s, 6.6 MB/s 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.543 13:31:52 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:40.543 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.802 1+0 records in 00:12:40.802 1+0 records out 00:12:40.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604783 s, 6.8 MB/s 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:40.802 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:12:41.060 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:41.060 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:41.060 13:31:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:41.060 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:12:41.060 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:41.060 13:31:52 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.060 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.060 13:31:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:12:41.060 13:31:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:41.060 13:31:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.060 13:31:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.060 13:31:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.060 1+0 records in 00:12:41.060 1+0 records out 00:12:41.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000723897 s, 5.7 MB/s 00:12:41.060 13:31:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.060 13:31:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:41.060 13:31:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.318 13:31:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.318 13:31:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:41.318 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:41.318 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:41.318 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:41.318 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:41.318 { 00:12:41.318 "nbd_device": "/dev/nbd0", 00:12:41.318 "bdev_name": "Nvme0n1" 00:12:41.318 }, 00:12:41.318 { 00:12:41.318 "nbd_device": "/dev/nbd1", 00:12:41.318 "bdev_name": "Nvme1n1" 00:12:41.318 }, 00:12:41.318 { 00:12:41.318 "nbd_device": "/dev/nbd2", 00:12:41.318 "bdev_name": "Nvme2n1" 00:12:41.318 }, 00:12:41.318 { 00:12:41.318 "nbd_device": "/dev/nbd3", 00:12:41.318 "bdev_name": "Nvme2n2" 00:12:41.318 }, 00:12:41.318 { 00:12:41.318 "nbd_device": "/dev/nbd4", 00:12:41.318 "bdev_name": "Nvme2n3" 00:12:41.318 }, 00:12:41.318 { 00:12:41.318 "nbd_device": "/dev/nbd5", 00:12:41.318 "bdev_name": "Nvme3n1" 00:12:41.318 } 00:12:41.318 ]' 00:12:41.318 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:41.318 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:41.318 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:41.318 { 00:12:41.318 "nbd_device": "/dev/nbd0", 00:12:41.318 "bdev_name": "Nvme0n1" 00:12:41.318 }, 00:12:41.318 { 00:12:41.318 "nbd_device": "/dev/nbd1", 00:12:41.318 "bdev_name": "Nvme1n1" 00:12:41.318 }, 00:12:41.318 { 00:12:41.318 "nbd_device": "/dev/nbd2", 00:12:41.318 "bdev_name": "Nvme2n1" 00:12:41.318 }, 00:12:41.318 { 00:12:41.318 "nbd_device": "/dev/nbd3", 00:12:41.318 "bdev_name": "Nvme2n2" 00:12:41.318 }, 00:12:41.318 { 00:12:41.318 "nbd_device": "/dev/nbd4", 00:12:41.318 "bdev_name": "Nvme2n3" 00:12:41.318 }, 00:12:41.318 { 00:12:41.318 "nbd_device": "/dev/nbd5", 00:12:41.318 "bdev_name": "Nvme3n1" 00:12:41.318 } 00:12:41.318 ]' 00:12:41.576 13:31:53 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:41.576 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.576 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:41.576 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:41.576 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:41.576 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.576 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:41.834 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:41.834 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:41.834 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:41.834 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.834 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.834 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:41.834 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:41.834 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.834 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.834 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:42.092 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:42.092 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:42.092 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:42.092 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.092 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.092 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:42.092 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:42.092 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.092 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.092 13:31:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.350 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:42.608 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:42.608 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:42.608 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:42.608 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.608 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.608 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:42.608 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:42.608 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.608 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.608 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:42.867 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:42.867 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:42.867 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:42.867 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.867 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.867 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:42.867 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:42.867 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.867 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:42.867 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.867 13:31:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:43.125 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:43.125 13:31:55 
00:12:43.125 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:43.383 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:12:43.383 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:12:43.384 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:12:43.643 /dev/nbd0
13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:43.643 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:43.643 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:43.643 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:12:43.643 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:43.643 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:43.643 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:43.643 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:43.643 1+0 records in
00:12:43.643 1+0 records out
00:12:43.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622935 s, 6.6 MB/s
00:12:43.643 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:43.643 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:43.643 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:43.643 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:43.643 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:43.643 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:43.643 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:12:43.643 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1
00:12:43.643 /dev/nbd1
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:43.902 1+0 records in
00:12:43.902 1+0 records out
00:12:43.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415264 s, 9.9 MB/s
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:12:43.902 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10
00:12:44.162 /dev/nbd10
13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:44.162 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:44.162 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:44.162 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:12:44.162 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:44.162 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:44.162 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:44.162 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:44.162 1+0 records in
00:12:44.162 1+0 records out
00:12:44.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660871 s, 6.2 MB/s
00:12:44.162 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:44.162 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:44.162 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:44.162 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:44.162 13:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:44.162 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:44.162 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:12:44.162 13:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11
00:12:44.162 /dev/nbd11
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:44.422 1+0 records in
00:12:44.422 1+0 records out
00:12:44.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713396 s, 5.7 MB/s
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:12:44.422 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12
00:12:44.422 /dev/nbd12
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:44.681 1+0 records in
00:12:44.681 1+0 records out
00:12:44.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000781614 s, 5.2 MB/s
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:12:44.681 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13
00:12:44.940 /dev/nbd13
00:12:44.940 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:12:44.940 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:12:44.940 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:12:44.940 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:12:44.940 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:12:44.940 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:12:44.940 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:12:44.940 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:12:44.940 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:12:44.940 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:12:44.940 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:12:44.940 1+0 records in
00:12:44.941 1+0 records out
00:12:44.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654262 s, 6.3 MB/s
00:12:44.941 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:44.941 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:12:44.941 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:12:44.941 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:12:44.941 13:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:12:44.941 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:12:44.941 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:12:44.941 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:44.941 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:44.941 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:45.199 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:12:45.199 {
00:12:45.199 "nbd_device": "/dev/nbd0",
00:12:45.199 "bdev_name": "Nvme0n1"
00:12:45.199 },
00:12:45.199 {
00:12:45.199 "nbd_device": "/dev/nbd1",
00:12:45.199 "bdev_name": "Nvme1n1"
00:12:45.199 },
00:12:45.199 {
00:12:45.199 "nbd_device": "/dev/nbd10",
00:12:45.199 "bdev_name": "Nvme2n1"
00:12:45.199 },
00:12:45.199 {
00:12:45.199 "nbd_device": "/dev/nbd11",
00:12:45.199 "bdev_name": "Nvme2n2"
00:12:45.199 },
00:12:45.199 {
00:12:45.199 "nbd_device": "/dev/nbd12",
00:12:45.199 "bdev_name": "Nvme2n3"
00:12:45.199 },
00:12:45.199 {
00:12:45.199 "nbd_device": "/dev/nbd13",
00:12:45.199 "bdev_name": "Nvme3n1"
00:12:45.199 }
00:12:45.199 ]'
00:12:45.199 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:12:45.199 {
00:12:45.199 "nbd_device": "/dev/nbd0",
00:12:45.199 "bdev_name": "Nvme0n1"
00:12:45.199 },
00:12:45.199 {
00:12:45.199 "nbd_device": "/dev/nbd1",
00:12:45.199 "bdev_name": "Nvme1n1"
00:12:45.199 },
00:12:45.199 {
00:12:45.199 "nbd_device": "/dev/nbd10",
00:12:45.199 "bdev_name": "Nvme2n1"
00:12:45.199 },
00:12:45.199 {
00:12:45.199 "nbd_device": "/dev/nbd11",
00:12:45.199 "bdev_name": "Nvme2n2"
00:12:45.199 },
00:12:45.199 {
00:12:45.199 "nbd_device": "/dev/nbd12",
00:12:45.199 "bdev_name": "Nvme2n3"
00:12:45.199 },
00:12:45.199 {
00:12:45.199 "nbd_device": "/dev/nbd13",
00:12:45.199 "bdev_name": "Nvme3n1"
00:12:45.199 }
00:12:45.199 ]'
00:12:45.199 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:45.199 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:12:45.199 /dev/nbd1
00:12:45.199 /dev/nbd10
00:12:45.199 /dev/nbd11
00:12:45.199 /dev/nbd12
00:12:45.200 /dev/nbd13'
00:12:45.200 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:12:45.200 /dev/nbd1
00:12:45.200 /dev/nbd10
00:12:45.200 /dev/nbd11
00:12:45.200 /dev/nbd12
00:12:45.200 /dev/nbd13'
00:12:45.200 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:45.200 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:12:45.200 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:12:45.200 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:12:45.200 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
00:12:45.200 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:12:45.200 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:12:45.200 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:12:45.200 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:12:45.200 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:12:45.200 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:12:45.200 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:12:45.200 256+0 records in
00:12:45.200 256+0 records out
00:12:45.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432537 s, 242 MB/s
00:12:45.200 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:45.200 13:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:12:45.200 256+0 records in
00:12:45.200 256+0 records out
00:12:45.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116513 s, 9.0 MB/s
00:12:45.200 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:45.200 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:12:45.487 256+0 records in
00:12:45.487 256+0 records out
00:12:45.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128003 s, 8.2 MB/s
00:12:45.487 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:45.487 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:12:45.487 256+0 records in
00:12:45.487 256+0 records out
00:12:45.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156341 s, 6.7 MB/s
00:12:45.487 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:45.487 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:12:45.745 256+0 records in
00:12:45.745 256+0 records out
00:12:45.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120741 s, 8.7 MB/s
00:12:45.745 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:45.745 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:12:45.745 256+0 records in
00:12:45.745 256+0 records out
00:12:45.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117182 s, 8.9 MB/s
00:12:45.745 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:45.745 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:12:46.004 256+0 records in
00:12:46.004 256+0 records out
00:12:46.004 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118096 s, 8.9 MB/s
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:46.004 13:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:46.262 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:46.262 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:46.262 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:46.262 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:46.262 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:46.262 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:46.262 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:46.262 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:46.262 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:46.262 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:12:46.521 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:12:46.521 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:12:46.521 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:12:46.521 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:46.521 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:46.521 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:46.521 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:46.521 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:46.521 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:46.521 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:12:46.778 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:12:46.778 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:12:46.778 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:12:46.778 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:46.778 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:46.778 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:12:46.778 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:46.778 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:46.778 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:46.778 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:12:47.037 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:12:47.037 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:12:47.037 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:12:47.037 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:47.037 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:47.037 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:12:47.037 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:47.037 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:47.037 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:47.037 13:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:12:47.296 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:12:47.296 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:12:47.296 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:12:47.296 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:47.296 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:47.296 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:12:47.296 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:47.296 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:47.296 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:47.296 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:12:47.554 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:12:47.554 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:12:47.554 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:12:47.554 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:47.554 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:47.554 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:12:47.554 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:47.554 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:47.554 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:47.554 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:47.554 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:12:47.814 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:12:48.073 malloc_lvol_verify
00:12:48.073 13:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:12:48.332 bb853392-9a6b-4f47-a146-8fb3fe6bc957
00:12:48.332 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:12:48.590 96580bf4-257c-4c4e-b964-a377d8dc7dc0
00:12:48.590 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:12:48.848 /dev/nbd0
00:12:48.848 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:12:48.848 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:12:48.848 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:12:48.848 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:12:48.848 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:12:48.848 mke2fs 1.47.0 (5-Feb-2023)
00:12:48.848 Discarding device blocks: 0/4096 done
00:12:48.848 Creating filesystem with 4096 1k blocks and 1024 inodes
00:12:48.848
00:12:48.848 Allocating group tables: 0/1 done
00:12:48.848 Writing inode tables: 0/1 done
00:12:48.848 Creating journal (1024 blocks): done
00:12:48.848 Writing superblocks and filesystem accounting information: 0/1 done
00:12:48.848
00:12:48.848 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:12:48.848 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:48.848 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:12:48.848 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:48.848 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:12:48.848 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:48.848 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61346
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61346 ']'
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61346
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61346
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:49.215 killing process with pid 61346
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61346'
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61346
00:12:49.215 13:32:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61346
00:12:50.615 13:32:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:12:50.615
00:12:50.615 real 0m11.964s
00:12:50.615 user 0m15.736s
00:12:50.615 sys 0m4.854s
00:12:50.615 ************************************
00:12:50.615 END TEST bdev_nbd
00:12:50.615 ************************************
00:12:50.615 13:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:50.615 13:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:12:50.615 13:32:02 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:12:50.615 13:32:02 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']'
00:12:50.615 skipping fio tests on NVMe due to multi-ns failures.
00:12:50.615 13:32:02 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:12:50.615 13:32:02 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:12:50.615 13:32:02 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:12:50.615 13:32:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:12:50.615 13:32:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:50.615 13:32:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:12:50.615 ************************************
00:12:50.615 START TEST bdev_verify
00:12:50.615 ************************************
00:12:50.615 13:32:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:12:50.615 [2024-11-20 13:32:02.378259] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:12:50.615 [2024-11-20 13:32:02.378391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61737 ]
00:12:50.615 [2024-11-20 13:32:02.565404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:50.875 [2024-11-20 13:32:02.696040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:50.875 [2024-11-20 13:32:02.696067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:51.812 Running I/O for 5 seconds...
00:12:54.125 16640.00 IOPS, 65.00 MiB/s [2024-11-20T13:32:06.673Z] 17536.00 IOPS, 68.50 MiB/s [2024-11-20T13:32:08.049Z] 18133.33 IOPS, 70.83 MiB/s [2024-11-20T13:32:08.614Z] 18048.00 IOPS, 70.50 MiB/s [2024-11-20T13:32:08.614Z] 17510.40 IOPS, 68.40 MiB/s
00:12:56.657
00:12:56.657 Latency(us)
00:12:56.657 [2024-11-20T13:32:08.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:56.657 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:56.657 Verification LBA range: start 0x0 length 0xbd0bd
00:12:56.657 Nvme0n1 : 5.10 1405.37 5.49 0.00 0.00 90861.17 15897.09 85065.20
00:12:56.657 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:56.657 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:12:56.657 Nvme0n1 : 5.07 1490.98 5.82 0.00 0.00 85626.24 17370.99 127176.69
00:12:56.657 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:56.657 Verification LBA range: start 0x0 length 0xa0000
00:12:56.657 Nvme1n1 : 5.10 1404.82 5.49 0.00 0.00 90725.95 18213.22 85486.32
00:12:56.657 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:56.657 Verification LBA range: start 0xa0000 length 0xa0000
00:12:56.657 Nvme1n1 : 5.07 1490.58 5.82 0.00 0.00 85490.31 19055.45 128861.15
00:12:56.657 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:56.657 Verification LBA range: start 0x0 length 0x80000
00:12:56.657 Nvme2n1 : 5.10 1404.29 5.49 0.00 0.00 90558.11 16528.76 85486.32
00:12:56.657 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:56.657 Verification LBA range: start 0x80000 length 0x80000
00:12:56.657 Nvme2n1 : 5.07 1490.19 5.82 0.00 0.00 85245.36 19160.73 127176.69
00:12:56.657 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:56.657 Verification LBA range: start 0x0 length 0x80000
00:12:56.658 Nvme2n2 : 5.11 1403.74 5.48 0.00 0.00 90462.15 16949.87 85486.32
00:12:56.658 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:56.658 Verification LBA range: start 0x80000 length 0x80000
00:12:56.658 Nvme2n2 : 5.07 1489.72 5.82 0.00 0.00 85098.07 18213.22 128861.15
00:12:56.658 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:56.658 Verification LBA range: start 0x0 length 0x80000
00:12:56.658 Nvme2n3 : 5.11 1402.64 5.48 0.00 0.00 90350.59 19371.28 86749.66
00:12:56.658 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:56.658 Verification LBA range: start 0x80000 length 0x80000
00:12:56.658 Nvme2n3 : 5.07 1489.27 5.82 0.00 0.00 84946.40 17370.99 130545.61
00:12:56.658 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:56.658 Verification LBA range: start 0x0 length 0x20000
00:12:56.658 Nvme3n1 : 5.11 1401.52 5.47 0.00 0.00 90212.65 16423.48 88013.01
00:12:56.658 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:56.658 Verification LBA range: start 0x20000 length 0x20000
00:12:56.658 Nvme3n1 : 5.08 1498.87 5.85 0.00 0.00 84293.32 3474.20 128861.15
00:12:56.658 [2024-11-20T13:32:08.615Z] ===================================================================================================================
00:12:56.658 [2024-11-20T13:32:08.615Z] Total : 17371.98 67.86 0.00 0.00 87749.43 3474.20 130545.61
00:12:58.561
00:12:58.561 real 0m7.750s
00:12:58.561 user 0m14.268s
00:12:58.561 sys 0m0.337s
00:12:58.561 13:32:10 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:58.561 ************************************
00:12:58.561 END TEST bdev_verify
00:12:58.561 ************************************
00:12:58.561 13:32:10 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:12:58.561 13:32:10 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:58.561 13:32:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:12:58.561 13:32:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:58.561 13:32:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:12:58.561 ************************************
00:12:58.561 START TEST bdev_verify_big_io
00:12:58.561 ************************************
00:12:58.561 13:32:10 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:58.561 [2024-11-20 13:32:10.218934] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:12:58.561 [2024-11-20 13:32:10.219086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61845 ]
00:12:58.561 [2024-11-20 13:32:10.409300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:58.820 [2024-11-20 13:32:10.529544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:58.820 [2024-11-20 13:32:10.529586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:59.818 Running I/O for 5 seconds...
00:13:04.264 1965.00 IOPS, 122.81 MiB/s [2024-11-20T13:32:16.788Z] 2776.50 IOPS, 173.53 MiB/s [2024-11-20T13:32:17.047Z] 2633.67 IOPS, 164.60 MiB/s [2024-11-20T13:32:17.625Z] 2685.75 IOPS, 167.86 MiB/s
00:13:05.668
00:13:05.668 Latency(us)
00:13:05.668 [2024-11-20T13:32:17.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:05.668 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:05.668 Verification LBA range: start 0x0 length 0xbd0b
00:13:05.668 Nvme0n1 : 5.56 159.06 9.94 0.00 0.00 772896.48 16634.04 1037627.01
00:13:05.668 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:05.668 Verification LBA range: start 0xbd0b length 0xbd0b
00:13:05.668 Nvme0n1 : 5.50 164.81 10.30 0.00 0.00 738139.98 24214.10 714210.80
00:13:05.668 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:05.668 Verification LBA range: start 0x0 length 0xa000
00:13:05.668 Nvme1n1 : 5.56 161.15 10.07 0.00 0.00 724636.33 50533.78 731055.40
00:13:05.668 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:05.668 Verification LBA range: start 0xa000 length 0xa000
00:13:05.668 Nvme1n1 : 5.51 174.30 10.89 0.00 0.00 708383.68 7316.87 680521.61
00:13:05.668 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:05.668 Verification LBA range: start 0x0 length 0x8000
00:13:05.668 Nvme2n1 : 5.70 175.67 10.98 0.00 0.00 646095.89 26951.35 643463.51
00:13:05.668 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:05.668 Verification LBA range: start 0x8000 length 0x8000
00:13:05.668 Nvme2n1 : 5.53 181.49 11.34 0.00 0.00 674735.25 6737.84 690628.37
00:13:05.668 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:05.668 Verification LBA range: start 0x0 length 0x8000
00:13:05.668 Nvme2n2 : 5.80 194.73 12.17 0.00 0.00 569801.76 19476.56 656939.18
00:13:05.668 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:05.668 Verification LBA range: start 0x8000 length 0x8000
00:13:05.668 Nvme2n2 : 5.53 181.46 11.34 0.00 0.00 663574.34 6948.40 704104.04
00:13:05.668 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:05.668 Verification LBA range: start 0x0 length 0x8000
00:13:05.668 Nvme2n3 : 5.87 213.63 13.35 0.00 0.00 509055.23 14739.02 680521.61
00:13:05.668 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:05.668 Verification LBA range: start 0x8000 length 0x8000
00:13:05.668 Nvme2n3 : 5.53 181.78 11.36 0.00 0.00 651229.33 6843.12 720948.64
00:13:05.668 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:05.668 Verification LBA range: start 0x0 length 0x2000
00:13:05.669 Nvme3n1 : 5.93 258.91 16.18 0.00 0.00 412648.76 611.93 704104.04
00:13:05.669 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:05.669 Verification LBA range: start 0x2000 length 0x2000
00:13:05.669 Nvme3n1 : 5.53 181.74 11.36 0.00 0.00 640086.55 7843.26 758006.75
00:13:05.669 [2024-11-20T13:32:17.626Z] ===================================================================================================================
00:13:05.669 [2024-11-20T13:32:17.626Z] Total : 2228.73 139.30 0.00 0.00 626587.74 611.93 1037627.01
00:13:07.581
00:13:07.581 real 0m9.189s
00:13:07.581 user 0m17.110s
00:13:07.581 sys 0m0.344s
00:13:07.581 13:32:19 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:07.581 ************************************
00:13:07.581 END TEST bdev_verify_big_io
00:13:07.581 ************************************
00:13:07.581 13:32:19 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:13:07.581 13:32:19 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:07.581 13:32:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:13:07.581 13:32:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:07.581 13:32:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:07.581 ************************************
00:13:07.581 START TEST bdev_write_zeroes
00:13:07.581 ************************************
00:13:07.582 13:32:19 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:07.582 [2024-11-20 13:32:19.471101] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:13:07.582 [2024-11-20 13:32:19.471242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61961 ]
00:13:07.840 [2024-11-20 13:32:19.652350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:07.840 [2024-11-20 13:32:19.773005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:08.774 Running I/O for 1 seconds...
00:13:09.707 75648.00 IOPS, 295.50 MiB/s
00:13:09.707
00:13:09.707 Latency(us)
00:13:09.707 [2024-11-20T13:32:21.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:09.707 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:09.707 Nvme0n1 : 1.02 12582.60 49.15 0.00 0.00 10147.89 8211.74 22634.92
00:13:09.707 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:09.707 Nvme1n1 : 1.02 12570.02 49.10 0.00 0.00 10145.73 8527.58 22740.20
00:13:09.707 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:09.707 Nvme2n1 : 1.02 12557.90 49.05 0.00 0.00 10115.92 8159.10 20108.23
00:13:09.707 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:09.707 Nvme2n2 : 1.02 12597.06 49.21 0.00 0.00 10047.82 5553.45 18423.78
00:13:09.707 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:09.707 Nvme2n3 : 1.02 12585.70 49.16 0.00 0.00 10034.77 5764.01 18213.22
00:13:09.707 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:09.707 Nvme3n1 : 1.02 12574.39 49.12 0.00 0.00 10025.11 6053.53 18318.50
00:13:09.707 [2024-11-20T13:32:21.664Z] ===================================================================================================================
00:13:09.707 [2024-11-20T13:32:21.664Z] Total : 75467.66 294.80 0.00 0.00 10086.08 5553.45 22740.20
00:13:11.085
00:13:11.085 real 0m3.312s
00:13:11.085 user 0m2.929s
00:13:11.085 sys 0m0.266s
00:13:11.085 13:32:22 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:11.085 13:32:22 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:13:11.085 ************************************
00:13:11.085 END TEST bdev_write_zeroes
00:13:11.085 ************************************
00:13:11.085 13:32:22 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:11.085 13:32:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:13:11.085 13:32:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:11.085 13:32:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:11.085 ************************************
00:13:11.085 START TEST bdev_json_nonenclosed
00:13:11.085 ************************************
00:13:11.085 13:32:22 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:11.085 [2024-11-20 13:32:22.851385] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:13:11.085 [2024-11-20 13:32:22.851530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62014 ] 00:13:11.085 [2024-11-20 13:32:23.031954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.345 [2024-11-20 13:32:23.150667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.345 [2024-11-20 13:32:23.150773] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:11.345 [2024-11-20 13:32:23.150796] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:11.345 [2024-11-20 13:32:23.150808] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:11.603 00:13:11.603 real 0m0.654s 00:13:11.603 user 0m0.395s 00:13:11.603 sys 0m0.155s 00:13:11.603 13:32:23 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.603 13:32:23 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:11.603 ************************************ 00:13:11.603 END TEST bdev_json_nonenclosed 00:13:11.603 ************************************ 00:13:11.603 13:32:23 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:11.603 13:32:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:11.603 13:32:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.603 13:32:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.603 ************************************ 00:13:11.603 START TEST bdev_json_nonarray 00:13:11.603 ************************************ 00:13:11.603 13:32:23 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:11.862 [2024-11-20 13:32:23.581412] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:13:11.862 [2024-11-20 13:32:23.581557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62045 ] 00:13:11.862 [2024-11-20 13:32:23.765919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.121 [2024-11-20 13:32:23.882638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.121 [2024-11-20 13:32:23.882754] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:13:12.121 [2024-11-20 13:32:23.882777] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:12.121 [2024-11-20 13:32:23.882789] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:12.380 00:13:12.380 real 0m0.646s 00:13:12.380 user 0m0.415s 00:13:12.380 sys 0m0.127s 00:13:12.380 13:32:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.380 13:32:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:12.380 ************************************ 00:13:12.380 END TEST bdev_json_nonarray 00:13:12.380 ************************************ 00:13:12.380 13:32:24 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:13:12.380 13:32:24 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:13:12.380 13:32:24 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:13:12.380 13:32:24 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:13:12.380 13:32:24 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:13:12.380 13:32:24 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:12.380 13:32:24 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:12.380 13:32:24 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:13:12.380 13:32:24 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:13:12.380 13:32:24 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:13:12.380 13:32:24 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:13:12.380 ************************************ 00:13:12.380 END TEST blockdev_nvme 00:13:12.380 ************************************ 00:13:12.380 00:13:12.380 real 0m44.367s 00:13:12.380 user 1m5.557s 00:13:12.380 sys 0m8.000s 00:13:12.380 13:32:24 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.380 13:32:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.380 13:32:24 -- spdk/autotest.sh@209 -- # uname -s 00:13:12.380 13:32:24 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:13:12.380 13:32:24 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:13:12.380 13:32:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:12.380 13:32:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.380 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:13:12.380 ************************************ 00:13:12.380 START TEST blockdev_nvme_gpt 00:13:12.380 ************************************ 00:13:12.380 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:13:12.640 * Looking for test storage... 
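Before blockdev_nvme_gpt runs anything, it locates test storage and checks whether the installed lcov is new enough; the lt 1.15 2 call traced below splits both version strings on ., -, and : and compares the fields numerically, left to right. Condensed to its core, the scripts/common.sh logic looks roughly like this (helper names taken from the trace; padding and edge-case handling trimmed):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
  local -a ver1 ver2
  local op=$2 v
  IFS='.-:' read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
  IFS='.-:' read -ra ver2 <<< "$3"   # "2"    -> (2); missing fields compare as 0
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    if (( ver1[v] > ver2[v] )); then [[ $op == '>' || $op == '>=' ]]; return; fi
    if (( ver1[v] < ver2[v] )); then [[ $op == '<' || $op == '<=' ]]; return; fi
  done
  [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
}

Here lt 1.15 2 succeeds on the first field (1 < 2), which is why the trace below exports the branch- and function-coverage LCOV option set.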
00:13:12.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:12.640 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:12.640 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:13:12.640 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:12.640 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:12.640 13:32:24 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:13:12.640 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.640 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:12.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.640 --rc genhtml_branch_coverage=1 00:13:12.640 --rc genhtml_function_coverage=1 00:13:12.640 --rc genhtml_legend=1 00:13:12.640 --rc geninfo_all_blocks=1 00:13:12.640 --rc geninfo_unexecuted_blocks=1 00:13:12.640 00:13:12.640 ' 00:13:12.640 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:12.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.640 --rc 
genhtml_branch_coverage=1 00:13:12.640 --rc genhtml_function_coverage=1 00:13:12.640 --rc genhtml_legend=1 00:13:12.640 --rc geninfo_all_blocks=1 00:13:12.640 --rc geninfo_unexecuted_blocks=1 00:13:12.640 00:13:12.640 ' 00:13:12.640 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:12.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.640 --rc genhtml_branch_coverage=1 00:13:12.640 --rc genhtml_function_coverage=1 00:13:12.640 --rc genhtml_legend=1 00:13:12.640 --rc geninfo_all_blocks=1 00:13:12.640 --rc geninfo_unexecuted_blocks=1 00:13:12.640 00:13:12.640 ' 00:13:12.640 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:12.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.640 --rc genhtml_branch_coverage=1 00:13:12.640 --rc genhtml_function_coverage=1 00:13:12.640 --rc genhtml_legend=1 00:13:12.640 --rc geninfo_all_blocks=1 00:13:12.640 --rc geninfo_unexecuted_blocks=1 00:13:12.640 00:13:12.640 ' 00:13:12.640 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:12.640 13:32:24 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:13:12.640 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:12.640 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:12.640 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:12.640 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62129 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:13:12.641 13:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62129 00:13:12.641 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62129 ']' 00:13:12.641 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.641 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.641 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.641 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.641 13:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:12.900 [2024-11-20 13:32:24.716233] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:13:12.900 [2024-11-20 13:32:24.716568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62129 ] 00:13:13.159 [2024-11-20 13:32:24.897766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.159 [2024-11-20 13:32:25.020964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.096 13:32:25 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.096 13:32:25 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:13:14.096 13:32:25 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:13:14.096 13:32:25 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:13:14.096 13:32:25 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:14.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:14.922 Waiting for block devices as requested 00:13:14.922 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.200 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.200 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.200 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:20.468 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:20.468 13:32:32 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:13:20.468 13:32:32 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:13:20.468 BYT; 00:13:20.468 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:13:20.468 BYT; 00:13:20.468 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:13:20.468 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:13:20.468 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:13:20.468 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:20.468 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:13:20.468 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:13:20.468 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:20.468 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:13:20.468 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:20.468 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:20.468 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:13:20.468 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:13:20.468 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:13:20.469 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:20.469 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:13:20.469 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:13:20.469 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:20.469 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:13:20.469 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:20.469 13:32:32 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:20.469 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:20.469 13:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:13:21.841 The operation has completed successfully. 00:13:21.841 13:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:13:22.775 The operation has completed successfully. 00:13:22.775 13:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:23.342 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:24.281 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.281 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.281 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.281 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.281 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:13:24.281 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.281 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:24.281 [] 00:13:24.282 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.282 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:13:24.282 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:13:24.282 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:13:24.282 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:24.282 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:13:24.282 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.282 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:24.880 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.880 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:13:24.880 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.880 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:24.880 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.880 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:13:24.880 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:13:24.880 13:32:36 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.880 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:24.880 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.880 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:13:24.880 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.880 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:24.880 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.880 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:24.880 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.880 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:24.880 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.880 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:13:24.880 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:13:24.880 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.880 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:24.880 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:13:24.880 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.880 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:13:24.880 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:13:24.881 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "1c92b36f-0c03-4de3-9ac2-11b23c50ea14"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1c92b36f-0c03-4de3-9ac2-11b23c50ea14",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b2c72d6b-4aa2-4a31-bccc-4ab0c3c3fc87"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b2c72d6b-4aa2-4a31-bccc-4ab0c3c3fc87",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "42edcb0c-b95a-4c90-a177-9ec7189c0491"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "42edcb0c-b95a-4c90-a177-9ec7189c0491",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "fff00a71-51d2-472a-a9fd-6f291cdb5299"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fff00a71-51d2-472a-a9fd-6f291cdb5299",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "4a301844-2e65-4eb7-96da-f32d87e1049e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4a301844-2e65-4eb7-96da-f32d87e1049e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:13:24.881 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:13:24.881 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:13:24.881 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:13:24.881 13:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62129 00:13:24.881 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62129 ']' 00:13:24.881 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62129 00:13:24.881 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:13:24.881 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.881 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62129 00:13:25.140 killing process with pid 62129 00:13:25.140 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:25.140 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:25.140 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62129' 00:13:25.140 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62129 00:13:25.140 13:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62129 00:13:27.674 13:32:39 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:27.674 13:32:39 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:27.674 13:32:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:27.674 13:32:39 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.674 13:32:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:27.674 ************************************ 00:13:27.674 START TEST bdev_hello_world 00:13:27.674 ************************************ 00:13:27.674 13:32:39 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:27.674 
[2024-11-20 13:32:39.466371] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:13:27.674 [2024-11-20 13:32:39.466534] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62776 ] 00:13:27.932 [2024-11-20 13:32:39.652211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.932 [2024-11-20 13:32:39.773896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.869 [2024-11-20 13:32:40.455756] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:28.869 [2024-11-20 13:32:40.455822] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:13:28.869 [2024-11-20 13:32:40.455856] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:28.869 [2024-11-20 13:32:40.459086] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:28.869 [2024-11-20 13:32:40.459715] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:28.869 [2024-11-20 13:32:40.459751] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:28.869 [2024-11-20 13:32:40.459961] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:13:28.869 00:13:28.869 [2024-11-20 13:32:40.459987] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:29.828 00:13:29.828 real 0m2.259s 00:13:29.828 user 0m1.887s 00:13:29.828 sys 0m0.260s 00:13:29.828 13:32:41 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.828 ************************************ 00:13:29.828 END TEST bdev_hello_world 00:13:29.828 ************************************ 00:13:29.828 13:32:41 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:29.828 13:32:41 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:13:29.828 13:32:41 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:29.828 13:32:41 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.828 13:32:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:29.828 ************************************ 00:13:29.828 START TEST bdev_bounds 00:13:29.828 ************************************ 00:13:29.828 13:32:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:13:29.828 Process bdevio pid: 62818 00:13:29.828 13:32:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62818 00:13:29.828 13:32:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:29.828 13:32:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:29.828 13:32:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62818' 00:13:29.828 13:32:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62818 00:13:29.828 13:32:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62818 ']' 00:13:29.828 13:32:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.828 13:32:41 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.828 13:32:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.828 13:32:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.828 13:32:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:29.828 [2024-11-20 13:32:41.779964] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:13:29.828 [2024-11-20 13:32:41.780325] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62818 ] 00:13:30.088 [2024-11-20 13:32:41.965909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:30.347 [2024-11-20 13:32:42.097389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.347 [2024-11-20 13:32:42.097458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.347 [2024-11-20 13:32:42.097491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.916 13:32:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.916 13:32:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:13:30.916 13:32:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:31.175 I/O targets: 00:13:31.175 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:31.175 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:13:31.175 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:13:31.175 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:31.175 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:31.175 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:31.175 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:31.175 00:13:31.175 00:13:31.175 CUnit - A unit testing framework for C - Version 2.1-3 00:13:31.175 http://cunit.sourceforge.net/ 00:13:31.175 00:13:31.175 00:13:31.175 Suite: bdevio tests on: Nvme3n1 00:13:31.175 Test: blockdev write read block ...passed 00:13:31.175 Test: blockdev write zeroes read block ...passed 00:13:31.175 Test: blockdev write zeroes read no split ...passed 00:13:31.175 Test: blockdev write zeroes read split ...passed 00:13:31.175 Test: blockdev write zeroes read split partial ...passed 00:13:31.175 Test: blockdev reset ...[2024-11-20 13:32:42.996709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:13:31.175 [2024-11-20 13:32:43.000887] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
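For context on the stream of CUnit output in this stretch: blockdev.sh@288-293, traced above, started bdevio against bdev.json, blocked on waitforlisten until the RPC socket at /var/tmp/spdk.sock answered, then kicked off every suite with tests.py perform_tests. Reassembled from the trace, the orchestration is roughly as follows; the backgrounding and pid capture are assumptions, and the flag meanings are inferred (-w taken to mean wait for the perform_tests RPC, -s 0 meaning no reserved memory):

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
bdevio_pid=$!                                   # 62818 in this run
waitforlisten "$bdevio_pid"                     # poll /var/tmp/spdk.sock, up to max_retries=100
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests   # fires the suites printed here
killprocess "$bdevio_pid"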
00:13:31.175 passed 00:13:31.175 Test: blockdev write read 8 blocks ...passed 00:13:31.175 Test: blockdev write read size > 128k ...passed 00:13:31.175 Test: blockdev write read invalid size ...passed 00:13:31.175 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.175 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.175 Test: blockdev write read max offset ...passed 00:13:31.176 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.176 Test: blockdev writev readv 8 blocks ...passed 00:13:31.176 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.176 Test: blockdev writev readv block ...passed 00:13:31.176 Test: blockdev writev readv size > 128k ...passed 00:13:31.176 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.176 Test: blockdev comparev and writev ...[2024-11-20 13:32:43.011041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1a04000 len:0x1000 00:13:31.176 [2024-11-20 13:32:43.011096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:31.176 passed 00:13:31.176 Test: blockdev nvme passthru rw ...passed 00:13:31.176 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:32:43.011967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:31.176 [2024-11-20 13:32:43.012096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:31.176 passed 00:13:31.176 Test: blockdev nvme admin passthru ...passed 00:13:31.176 Test: blockdev copy ...passed 00:13:31.176 Suite: bdevio tests on: Nvme2n3 00:13:31.176 Test: blockdev write read block ...passed 00:13:31.176 Test: blockdev write zeroes read block ...passed 00:13:31.176 Test: blockdev write zeroes read no split ...passed 00:13:31.176 Test: blockdev write zeroes read split ...passed 00:13:31.176 Test: blockdev write zeroes read split partial ...passed 00:13:31.176 Test: blockdev reset ...[2024-11-20 13:32:43.123800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:31.176 [2024-11-20 13:32:43.128505] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:13:31.176 passed 00:13:31.176 Test: blockdev write read 8 blocks ...passed 00:13:31.176 Test: blockdev write read size > 128k ...passed 00:13:31.176 Test: blockdev write read invalid size ...passed 00:13:31.176 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.176 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.176 Test: blockdev write read max offset ...passed 00:13:31.176 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.434 Test: blockdev writev readv 8 blocks ...passed 00:13:31.434 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.434 Test: blockdev writev readv block ...passed 00:13:31.434 Test: blockdev writev readv size > 128k ...passed 00:13:31.434 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.434 Test: blockdev comparev and writev ...[2024-11-20 13:32:43.137328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:13:31.434 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2b1a02000 len:0x1000 00:13:31.434 [2024-11-20 13:32:43.137521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:31.434 passed 00:13:31.434 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.434 Test: blockdev nvme admin passthru ...[2024-11-20 13:32:43.138382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:31.434 [2024-11-20 13:32:43.138433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:31.434 passed 00:13:31.434 Test: blockdev copy ...passed 00:13:31.434 Suite: bdevio tests on: Nvme2n2 00:13:31.434 Test: blockdev write read block ...passed 00:13:31.434 Test: blockdev write zeroes read block ...passed 00:13:31.434 Test: blockdev write zeroes read no split ...passed 00:13:31.434 Test: blockdev write zeroes read split ...passed 00:13:31.435 Test: blockdev write zeroes read split partial ...passed 00:13:31.435 Test: blockdev reset ...[2024-11-20 13:32:43.220830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:31.435 [2024-11-20 13:32:43.225318] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spasseduccessful. 
00:13:31.435 00:13:31.435 Test: blockdev write read 8 blocks ...passed 00:13:31.435 Test: blockdev write read size > 128k ...passed 00:13:31.435 Test: blockdev write read invalid size ...passed 00:13:31.435 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.435 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.435 Test: blockdev write read max offset ...passed 00:13:31.435 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.435 Test: blockdev writev readv 8 blocks ...passed 00:13:31.435 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.435 Test: blockdev writev readv block ...passed 00:13:31.435 Test: blockdev writev readv size > 128k ...passed 00:13:31.435 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.435 Test: blockdev comparev and writev ...[2024-11-20 13:32:43.234976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5838000 len:0x1000 00:13:31.435 [2024-11-20 13:32:43.235171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:31.435 passed 00:13:31.435 Test: blockdev nvme passthru rw ...passed 00:13:31.435 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:32:43.236233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:31.435 [2024-11-20 13:32:43.236432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed sqhd:001c p:1 m:0 dnr:1 00:13:31.435 00:13:31.435 Test: blockdev nvme admin passthru ...passed 00:13:31.435 Test: blockdev copy ...passed 00:13:31.435 Suite: bdevio tests on: Nvme2n1 00:13:31.435 Test: blockdev write read block ...passed 00:13:31.435 Test: blockdev write zeroes read block ...passed 00:13:31.435 Test: blockdev write zeroes read no split ...passed 00:13:31.435 Test: blockdev write zeroes read split ...passed 00:13:31.435 Test: blockdev write zeroes read split partial ...passed 00:13:31.435 Test: blockdev reset ...[2024-11-20 13:32:43.317814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:31.435 passed 00:13:31.435 Test: blockdev write read 8 blocks ...[2024-11-20 13:32:43.322461] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:13:31.435 passed 00:13:31.435 Test: blockdev write read size > 128k ...passed 00:13:31.435 Test: blockdev write read invalid size ...passed 00:13:31.435 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.435 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.435 Test: blockdev write read max offset ...passed 00:13:31.435 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.435 Test: blockdev writev readv 8 blocks ...passed 00:13:31.435 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.435 Test: blockdev writev readv block ...passed 00:13:31.435 Test: blockdev writev readv size > 128k ...passed 00:13:31.435 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.435 Test: blockdev comparev and writev ...[2024-11-20 13:32:43.331712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5834000 len:0x1000 00:13:31.435 [2024-11-20 13:32:43.331767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:31.435 passed 00:13:31.435 Test: blockdev nvme passthru rw ...passed 00:13:31.435 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:32:43.332558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:31.435 [2024-11-20 13:32:43.332592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:31.435 passed 00:13:31.435 Test: blockdev nvme admin passthru ...passed 00:13:31.435 Test: blockdev copy ...passed 00:13:31.435 Suite: bdevio tests on: Nvme1n1p2 00:13:31.435 Test: blockdev write read block ...passed 00:13:31.435 Test: blockdev write zeroes read block ...passed 00:13:31.435 Test: blockdev write zeroes read no split ...passed 00:13:31.435 Test: blockdev write zeroes read split ...passed 00:13:31.695 Test: blockdev write zeroes read split partial ...passed 00:13:31.695 Test: blockdev reset ...[2024-11-20 13:32:43.434317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:13:31.695 [2024-11-20 13:32:43.438464] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:13:31.695 passed 00:13:31.695 Test: blockdev write read 8 blocks ...passed 00:13:31.695 Test: blockdev write read size > 128k ...passed 00:13:31.695 Test: blockdev write read invalid size ...passed 00:13:31.695 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.695 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.695 Test: blockdev write read max offset ...passed 00:13:31.695 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.695 Test: blockdev writev readv 8 blocks ...passed 00:13:31.695 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.695 Test: blockdev writev readv block ...passed 00:13:31.695 Test: blockdev writev readv size > 128k ...passed 00:13:31.695 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.695 Test: blockdev comparev and writev ...[2024-11-20 13:32:43.448471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c5830000 len:0x1000 00:13:31.695 [2024-11-20 13:32:43.448683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:31.695 passed 00:13:31.695 Test: blockdev nvme passthru rw ...passed 00:13:31.695 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.695 Test: blockdev nvme admin passthru ...passed 00:13:31.695 Test: blockdev copy ...passed 00:13:31.695 Suite: bdevio tests on: Nvme1n1p1 00:13:31.695 Test: blockdev write read block ...passed 00:13:31.695 Test: blockdev write zeroes read block ...passed 00:13:31.695 Test: blockdev write zeroes read no split ...passed 00:13:31.695 Test: blockdev write zeroes read split ...passed 00:13:31.695 Test: blockdev write zeroes read split partial ...passed 00:13:31.695 Test: blockdev reset ...[2024-11-20 13:32:43.519652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:13:31.695 passed 00:13:31.695 Test: blockdev write read 8 blocks ...[2024-11-20 13:32:43.523790] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:13:31.695 passed 00:13:31.695 Test: blockdev write read size > 128k ...passed 00:13:31.695 Test: blockdev write read invalid size ...passed 00:13:31.695 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.695 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.695 Test: blockdev write read max offset ...passed 00:13:31.695 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.695 Test: blockdev writev readv 8 blocks ...passed 00:13:31.696 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.696 Test: blockdev writev readv block ...passed 00:13:31.696 Test: blockdev writev readv size > 128k ...passed 00:13:31.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.696 Test: blockdev comparev and writev ...[2024-11-20 13:32:43.531751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b1c0e000 len:0x1000 00:13:31.696 [2024-11-20 13:32:43.531919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:31.696 passed 00:13:31.696 Test: blockdev nvme passthru rw ...passed 00:13:31.696 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.696 Test: blockdev nvme admin passthru ...passed 00:13:31.696 Test: blockdev copy ...passed 00:13:31.696 Suite: bdevio tests on: Nvme0n1 00:13:31.696 Test: blockdev write read block ...passed 00:13:31.696 Test: blockdev write zeroes read block ...passed 00:13:31.696 Test: blockdev write zeroes read no split ...passed 00:13:31.696 Test: blockdev write zeroes read split ...passed 00:13:31.696 Test: blockdev write zeroes read split partial ...passed 00:13:31.696 Test: blockdev reset ...[2024-11-20 13:32:43.601809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:31.696 passed 00:13:31.696 Test: blockdev write read 8 blocks ...[2024-11-20 13:32:43.605803] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:13:31.696 passed 00:13:31.696 Test: blockdev write read size > 128k ...passed 00:13:31.696 Test: blockdev write read invalid size ...passed 00:13:31.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.696 Test: blockdev write read max offset ...passed 00:13:31.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.696 Test: blockdev writev readv 8 blocks ...passed 00:13:31.696 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.696 Test: blockdev writev readv block ...passed 00:13:31.696 Test: blockdev writev readv size > 128k ...passed 00:13:31.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.696 Test: blockdev comparev and writev ...passed 00:13:31.696 Test: blockdev nvme passthru rw ...[2024-11-20 13:32:43.613574] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:13:31.696 separate metadata which is not supported yet.
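The bdevio.c:727 ERROR above is informational rather than a failure: Nvme0n1 is formatted with separate per-block metadata (DIF-style protection information), which the comparev_and_writev helper cannot exercise yet, so that case is skipped and still reported as passed. A quick host-side way to see whether a namespace carries such metadata (a sketch; the device path is an assumption and nvme-cli must be installed):

    # sketch: a non-zero ms: value on the in-use LBA format means separate metadata
    sudo nvme id-ns /dev/nvme0n1 | grep -i lbaf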
00:13:31.696 passed 00:13:31.696 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:32:43.614259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:13:31.696 [2024-11-20 13:32:43.614307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:13:31.696 passed 00:13:31.696 Test: blockdev nvme admin passthru ...passed 00:13:31.696 Test: blockdev copy ...passed 00:13:31.696 00:13:31.696 Run Summary: Type Total Ran Passed Failed Inactive 00:13:31.696 suites 7 7 n/a 0 0 00:13:31.696 tests 161 161 161 0 0 00:13:31.696 asserts 1025 1025 1025 0 n/a 00:13:31.696 00:13:31.696 Elapsed time = 1.892 seconds 00:13:31.696 0 00:13:31.696 13:32:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62818 00:13:31.696 13:32:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62818 ']' 00:13:31.696 13:32:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62818 00:13:31.954 13:32:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:13:31.954 13:32:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.954 13:32:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62818 00:13:31.954 13:32:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.954 13:32:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.954 13:32:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62818' 00:13:31.954 killing process with pid 62818 00:13:31.954 13:32:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62818 00:13:31.954 13:32:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62818 00:13:32.889 13:32:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:32.889 00:13:32.889 real 0m3.114s 00:13:32.889 user 0m8.094s 00:13:32.889 sys 0m0.454s 00:13:32.889 13:32:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.889 ************************************ 00:13:32.889 END TEST bdev_bounds 00:13:32.889 13:32:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:32.889 ************************************ 00:13:33.148 13:32:44 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:33.148 13:32:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:33.148 13:32:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.148 13:32:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:33.148 ************************************ 00:13:33.148 START TEST bdev_nbd 00:13:33.148 ************************************ 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62889 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62889 /var/tmp/spdk-nbd.sock 00:13:33.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62889 ']' 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.148 13:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:33.148 [2024-11-20 13:32:44.978945] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
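Stripped of the xtrace noise, the setup above is straightforward: verify the nbd kernel module is present, start a bare bdev_svc application that owns the seven bdevs and listens on a private RPC socket, and only then export bdevs as /dev/nbdX nodes. A condensed sketch of the same flow, using the paths from this log (the modprobe step is an assumption; the trace only checks /sys/module/nbd):

    # sketch: expose an SPDK bdev as a kernel block device over NBD
    sudo modprobe nbd
    sudo ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0

Once exported, /dev/nbd0 accepts ordinary block I/O, and the current export list can be read back with rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device', exactly as the trace does further down.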
00:13:33.148 [2024-11-20 13:32:44.979130] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.407 [2024-11-20 13:32:45.174147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.407 [2024-11-20 13:32:45.297109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.365 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:34.366 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.628 1+0 records in 00:13:34.628 1+0 records out 00:13:34.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798355 s, 5.1 MB/s 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:34.628 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.886 1+0 records in 00:13:34.886 1+0 records out 00:13:34.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511512 s, 8.0 MB/s 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:34.886 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:13:35.144 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:35.144 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.144 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.144 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.144 1+0 records in 00:13:35.144 1+0 records out 00:13:35.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517444 s, 7.9 MB/s 00:13:35.144 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.144 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:35.144 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.144 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.144 13:32:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:35.144 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.144 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:35.144 13:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:13:35.144 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:35.144 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:35.144 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:35.144 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:13:35.144 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:35.144 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.144 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:35.144 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:13:35.402 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:35.402 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.402 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.402 13:32:47 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.402 1+0 records in 00:13:35.402 1+0 records out 00:13:35.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717774 s, 5.7 MB/s 00:13:35.402 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.402 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:35.402 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.402 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.402 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:35.402 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.402 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:35.402 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:13:35.402 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:35.402 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.660 1+0 records in 00:13:35.660 1+0 records out 00:13:35.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720096 s, 5.7 MB/s 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:35.660 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.919 1+0 records in 00:13:35.919 1+0 records out 00:13:35.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00086588 s, 4.7 MB/s 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.919 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:35.920 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.179 1+0 records in 00:13:36.179 1+0 records out 00:13:36.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582509 s, 7.0 MB/s 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:36.179 13:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:36.439 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:36.439 { 00:13:36.439 "nbd_device": "/dev/nbd0", 00:13:36.439 "bdev_name": "Nvme0n1" 00:13:36.439 }, 00:13:36.439 { 00:13:36.439 "nbd_device": "/dev/nbd1", 00:13:36.439 "bdev_name": "Nvme1n1p1" 00:13:36.439 }, 00:13:36.439 { 00:13:36.439 "nbd_device": "/dev/nbd2", 00:13:36.439 "bdev_name": "Nvme1n1p2" 00:13:36.439 }, 00:13:36.439 { 00:13:36.439 "nbd_device": "/dev/nbd3", 00:13:36.439 "bdev_name": "Nvme2n1" 00:13:36.439 }, 00:13:36.439 { 00:13:36.439 "nbd_device": "/dev/nbd4", 00:13:36.439 "bdev_name": "Nvme2n2" 00:13:36.439 }, 00:13:36.439 { 00:13:36.439 "nbd_device": "/dev/nbd5", 00:13:36.439 "bdev_name": "Nvme2n3" 00:13:36.439 }, 00:13:36.439 { 00:13:36.439 "nbd_device": "/dev/nbd6", 00:13:36.439 "bdev_name": "Nvme3n1" 00:13:36.439 } 00:13:36.439 ]' 00:13:36.439 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:36.439 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:36.439 { 00:13:36.439 "nbd_device": "/dev/nbd0", 00:13:36.439 "bdev_name": "Nvme0n1" 00:13:36.439 }, 00:13:36.439 { 00:13:36.439 "nbd_device": "/dev/nbd1", 00:13:36.439 "bdev_name": "Nvme1n1p1" 00:13:36.439 }, 00:13:36.439 { 00:13:36.439 "nbd_device": "/dev/nbd2", 00:13:36.439 "bdev_name": "Nvme1n1p2" 00:13:36.439 }, 00:13:36.439 { 00:13:36.439 "nbd_device": "/dev/nbd3", 00:13:36.439 "bdev_name": "Nvme2n1" 00:13:36.439 }, 00:13:36.439 { 00:13:36.439 "nbd_device": "/dev/nbd4", 00:13:36.439 "bdev_name": "Nvme2n2" 00:13:36.439 }, 00:13:36.439 { 00:13:36.439 "nbd_device": "/dev/nbd5", 00:13:36.439 "bdev_name": "Nvme2n3" 00:13:36.439 }, 00:13:36.439 { 00:13:36.439 "nbd_device": "/dev/nbd6", 00:13:36.439 "bdev_name": "Nvme3n1" 00:13:36.439 } 00:13:36.439 ]' 00:13:36.439 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:36.439 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:13:36.439 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:36.439 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:13:36.439 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:36.439 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:36.439 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.439 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:36.439 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:36.439 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:36.440 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:36.440 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.440 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.440 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:36.699 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:36.699 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.699 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.699 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:36.699 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:36.699 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:36.699 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:36.699 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.699 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.699 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:36.699 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:36.699 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.699 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.699 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:36.958 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:36.958 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:36.958 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:36.958 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.958 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.958 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:36.958 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:36.958 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.958 13:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.958 13:32:48 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:37.218 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:37.218 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:37.218 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:37.218 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.218 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.218 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:37.218 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.218 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.218 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.218 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:37.475 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:37.475 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:37.475 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:37.475 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.475 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.475 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:37.475 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.475 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.475 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.475 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:37.734 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:37.734 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:37.734 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:37.734 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.734 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.734 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:37.734 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.734 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.734 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.734 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:37.993 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:37.993 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:37.993 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:13:37.993 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.993 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.993 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:37.993 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.993 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.993 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:37.993 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.993 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:37.993 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:37.993 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:37.993 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:38.252 13:32:49 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:38.252 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:38.253 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:38.253 13:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:13:38.253 /dev/nbd0 00:13:38.253 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:38.511 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:38.511 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:38.511 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:38.511 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:38.511 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:38.511 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:38.511 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:38.511 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:38.511 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:38.511 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.511 1+0 records in 00:13:38.511 1+0 records out 00:13:38.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000673677 s, 6.1 MB/s 00:13:38.511 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.512 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:38.512 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.512 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:38.512 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:38.512 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.512 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:38.512 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:13:38.512 /dev/nbd1 00:13:38.512 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:38.770 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:38.770 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:38.770 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:38.770 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:38.770 13:32:50 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:38.770 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:38.770 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:38.770 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:38.770 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:38.770 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.770 1+0 records in 00:13:38.770 1+0 records out 00:13:38.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000709806 s, 5.8 MB/s 00:13:38.770 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.770 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:38.770 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.770 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:38.771 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:38.771 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.771 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:38.771 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:13:38.771 /dev/nbd10 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.030 1+0 records in 00:13:39.030 1+0 records out 00:13:39.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563971 s, 7.3 MB/s 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:39.030 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:13:39.030 /dev/nbd11 00:13:39.290 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:39.290 13:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:39.290 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:39.290 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:39.290 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:39.290 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:39.290 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:39.290 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:39.290 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:39.290 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:39.290 13:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.290 1+0 records in 00:13:39.290 1+0 records out 00:13:39.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624748 s, 6.6 MB/s 00:13:39.290 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.290 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:39.290 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.290 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:39.290 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:39.290 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.290 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:39.290 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:13:39.290 /dev/nbd12 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
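The repeating grep/dd/stat pattern that fills these entries is a single helper: after each nbd_start_disk, the harness polls /proc/partitions until the kernel publishes the new node, then performs one 4 KiB O_DIRECT read and checks the resulting file size to prove the device actually services I/O. A rough reconstruction from the trace (a sketch: the output path is shortened and the sleep interval is assumed, since only the loop bounds and checks are visible here):

    # sketch: reconstruction of the waitfornbd helper being traced above
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do              # wait for /dev/$nbd_name to appear
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                # assumed retry delay
        done
        for ((i = 1; i <= 20; i++)); do              # prove the device answers real I/O
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && break                # trace shows: '[' 4096 '!=' 0 ']'
            sleep 0.1
        done
        return 0
    }

The 4-8 MB/s figures each dd prints are dominated by the latency of a single 4 KiB direct read, not by device bandwidth, so they are expected to look low.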
00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.551 1+0 records in 00:13:39.551 1+0 records out 00:13:39.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569364 s, 7.2 MB/s 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:13:39.551 /dev/nbd13 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.551 1+0 records in 00:13:39.551 1+0 records out 00:13:39.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103118 s, 4.0 MB/s 00:13:39.551 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:13:39.811 /dev/nbd14 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:39.811 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:39.812 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.812 1+0 records in 00:13:39.812 1+0 records out 00:13:39.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617211 s, 6.6 MB/s 00:13:39.812 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.812 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:39.812 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.812 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:39.812 13:32:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:39.812 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.812 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:39.812 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:39.812 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.812 13:32:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:40.379 { 00:13:40.379 "nbd_device": "/dev/nbd0", 00:13:40.379 "bdev_name": "Nvme0n1" 00:13:40.379 }, 00:13:40.379 { 00:13:40.379 "nbd_device": "/dev/nbd1", 00:13:40.379 "bdev_name": "Nvme1n1p1" 00:13:40.379 }, 00:13:40.379 { 00:13:40.379 "nbd_device": "/dev/nbd10", 00:13:40.379 "bdev_name": "Nvme1n1p2" 00:13:40.379 }, 00:13:40.379 { 00:13:40.379 "nbd_device": "/dev/nbd11", 00:13:40.379 "bdev_name": "Nvme2n1" 00:13:40.379 }, 00:13:40.379 { 00:13:40.379 "nbd_device": "/dev/nbd12", 00:13:40.379 "bdev_name": "Nvme2n2" 00:13:40.379 }, 00:13:40.379 { 00:13:40.379 "nbd_device": "/dev/nbd13", 00:13:40.379 "bdev_name": "Nvme2n3" 
00:13:40.379 }, 00:13:40.379 { 00:13:40.379 "nbd_device": "/dev/nbd14", 00:13:40.379 "bdev_name": "Nvme3n1" 00:13:40.379 } 00:13:40.379 ]' 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:40.379 { 00:13:40.379 "nbd_device": "/dev/nbd0", 00:13:40.379 "bdev_name": "Nvme0n1" 00:13:40.379 }, 00:13:40.379 { 00:13:40.379 "nbd_device": "/dev/nbd1", 00:13:40.379 "bdev_name": "Nvme1n1p1" 00:13:40.379 }, 00:13:40.379 { 00:13:40.379 "nbd_device": "/dev/nbd10", 00:13:40.379 "bdev_name": "Nvme1n1p2" 00:13:40.379 }, 00:13:40.379 { 00:13:40.379 "nbd_device": "/dev/nbd11", 00:13:40.379 "bdev_name": "Nvme2n1" 00:13:40.379 }, 00:13:40.379 { 00:13:40.379 "nbd_device": "/dev/nbd12", 00:13:40.379 "bdev_name": "Nvme2n2" 00:13:40.379 }, 00:13:40.379 { 00:13:40.379 "nbd_device": "/dev/nbd13", 00:13:40.379 "bdev_name": "Nvme2n3" 00:13:40.379 }, 00:13:40.379 { 00:13:40.379 "nbd_device": "/dev/nbd14", 00:13:40.379 "bdev_name": "Nvme3n1" 00:13:40.379 } 00:13:40.379 ]' 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:40.379 /dev/nbd1 00:13:40.379 /dev/nbd10 00:13:40.379 /dev/nbd11 00:13:40.379 /dev/nbd12 00:13:40.379 /dev/nbd13 00:13:40.379 /dev/nbd14' 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:40.379 /dev/nbd1 00:13:40.379 /dev/nbd10 00:13:40.379 /dev/nbd11 00:13:40.379 /dev/nbd12 00:13:40.379 /dev/nbd13 00:13:40.379 /dev/nbd14' 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:40.379 256+0 records in 00:13:40.379 256+0 records out 00:13:40.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126417 s, 82.9 MB/s 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:40.379 256+0 records in 00:13:40.379 256+0 records out 00:13:40.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.13092 s, 8.0 MB/s 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:40.379 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:40.638 256+0 records in 00:13:40.638 256+0 records out 00:13:40.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142927 s, 7.3 MB/s 00:13:40.638 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:40.638 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:40.638 256+0 records in 00:13:40.638 256+0 records out 00:13:40.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139705 s, 7.5 MB/s 00:13:40.638 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:40.638 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:40.897 256+0 records in 00:13:40.897 256+0 records out 00:13:40.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141722 s, 7.4 MB/s 00:13:40.897 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:40.897 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:41.156 256+0 records in 00:13:41.156 256+0 records out 00:13:41.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144632 s, 7.2 MB/s 00:13:41.156 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:41.156 13:32:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:41.156 256+0 records in 00:13:41.156 256+0 records out 00:13:41.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162434 s, 6.5 MB/s 00:13:41.156 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:41.156 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:13:41.415 256+0 records in 00:13:41.415 256+0 records out 00:13:41.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144588 s, 7.3 MB/s 00:13:41.415 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:13:41.415 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:41.415 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:41.415 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:41.415 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:41.415 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:41.415 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:41.415 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:13:41.415 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:41.415 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.416 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:41.675 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:41.675 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:41.675 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:41.675 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.675 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.675 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:41.675 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:41.675 13:32:53 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:13:41.675 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.675 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:41.933 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:41.933 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:41.933 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:41.933 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.933 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.933 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:41.933 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:41.933 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.933 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.933 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:42.192 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:42.192 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:42.192 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:42.192 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.192 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.192 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:42.192 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.192 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.192 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.192 13:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:42.452 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:42.452 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:42.452 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:42.452 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.452 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.452 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:42.452 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.452 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.452 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.452 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.711 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:42.970 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:42.970 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:42.970 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:42.970 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.970 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.970 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:42.970 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.970 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.970 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:42.970 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:42.970 13:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:43.230 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:43.488 malloc_lvol_verify 00:13:43.488 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:43.748 1de6f699-e4f0-41d4-b487-d5115ba3f391 00:13:43.748 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:44.007 0963768a-53a9-4bf1-a7d2-1e86781ec65f 00:13:44.007 13:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:44.273 /dev/nbd0 00:13:44.273 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:44.273 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:44.273 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:44.273 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:44.273 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:44.273 mke2fs 1.47.0 (5-Feb-2023) 00:13:44.273 Discarding device blocks: 0/4096 done 00:13:44.273 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:44.273 00:13:44.273 Allocating group tables: 0/1 done 00:13:44.273 Writing inode tables: 0/1 done 00:13:44.273 Creating journal (1024 blocks): done 00:13:44.273 Writing superblocks and filesystem accounting information: 0/1 done 00:13:44.273 00:13:44.273 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:44.273 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:44.273 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:44.273 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.273 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:44.273 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:13:44.273 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62889 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62889 ']' 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62889 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62889 00:13:44.531 killing process with pid 62889 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62889' 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62889 00:13:44.531 13:32:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62889 00:13:45.908 ************************************ 00:13:45.908 END TEST bdev_nbd 00:13:45.908 ************************************ 00:13:45.908 13:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:45.908 00:13:45.908 real 0m12.769s 00:13:45.908 user 0m16.489s 00:13:45.908 sys 0m5.378s 00:13:45.908 13:32:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.908 13:32:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:45.908 13:32:57 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:13:45.908 13:32:57 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:13:45.908 13:32:57 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:13:45.908 skipping fio tests on NVMe due to multi-ns failures. 00:13:45.908 13:32:57 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:13:45.908 13:32:57 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:45.908 13:32:57 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:45.908 13:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:45.908 13:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.908 13:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:45.908 ************************************ 00:13:45.908 START TEST bdev_verify 00:13:45.908 ************************************ 00:13:45.908 13:32:57 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:45.908 [2024-11-20 13:32:57.795649] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:13:45.908 [2024-11-20 13:32:57.795772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63317 ] 00:13:46.167 [2024-11-20 13:32:57.976406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:46.167 [2024-11-20 13:32:58.096730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.167 [2024-11-20 13:32:58.096779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.102 Running I/O for 5 seconds... 
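For readability, here is the bdevperf command just launched, with each flag annotated; the annotations are my reading of standard bdevperf options rather than anything this log states:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
"$bdevperf" --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
# --json     bdev configuration to load at startup
# -q 128     queue depth per job
# -o 4096    I/O size in bytes (4 KiB)
# -w verify  write each block, read it back, and compare
# -t 5       run time in seconds
# -C         let every core submit I/O to each bdev (hence the paired
#            Core Mask 0x1 / 0x2 jobs per bdev in the results below)
# -m 0x3     core mask: two reactors, cores 0 and 1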
00:13:49.437 20416.00 IOPS, 79.75 MiB/s [2024-11-20T13:33:02.330Z] 20736.00 IOPS, 81.00 MiB/s [2024-11-20T13:33:03.266Z] 20458.67 IOPS, 79.92 MiB/s [2024-11-20T13:33:04.204Z] 20640.00 IOPS, 80.62 MiB/s [2024-11-20T13:33:04.204Z] 20672.00 IOPS, 80.75 MiB/s 00:13:52.247 Latency(us) 00:13:52.247 [2024-11-20T13:33:04.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.247 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:52.247 Verification LBA range: start 0x0 length 0xbd0bd 00:13:52.247 Nvme0n1 : 5.07 1451.64 5.67 0.00 0.00 87732.34 13475.68 75379.56 00:13:52.247 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:52.247 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:52.247 Nvme0n1 : 5.07 1450.46 5.67 0.00 0.00 87733.52 17792.10 76221.79 00:13:52.247 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:52.247 Verification LBA range: start 0x0 length 0x4ff80 00:13:52.247 Nvme1n1p1 : 5.07 1451.14 5.67 0.00 0.00 87634.49 13107.20 71589.53 00:13:52.247 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:52.247 Verification LBA range: start 0x4ff80 length 0x4ff80 00:13:52.247 Nvme1n1p1 : 5.09 1458.10 5.70 0.00 0.00 87445.13 13475.68 72431.76 00:13:52.247 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:52.247 Verification LBA range: start 0x0 length 0x4ff7f 00:13:52.247 Nvme1n1p2 : 5.09 1458.58 5.70 0.00 0.00 87309.66 14633.74 72431.76 00:13:52.247 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:52.247 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:13:52.247 Nvme1n1p2 : 5.09 1457.18 5.69 0.00 0.00 87332.69 14212.63 72852.87 00:13:52.247 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:52.247 Verification LBA range: start 0x0 length 0x80000 00:13:52.247 Nvme2n1 : 5.09 1458.21 5.70 0.00 0.00 87191.47 15054.86 71589.53 00:13:52.247 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:52.247 Verification LBA range: start 0x80000 length 0x80000 00:13:52.247 Nvme2n1 : 5.10 1456.77 5.69 0.00 0.00 87204.31 14423.18 73273.99 00:13:52.247 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:52.247 Verification LBA range: start 0x0 length 0x80000 00:13:52.247 Nvme2n2 : 5.09 1457.70 5.69 0.00 0.00 87063.98 15265.41 72010.64 00:13:52.247 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:52.247 Verification LBA range: start 0x80000 length 0x80000 00:13:52.247 Nvme2n2 : 5.10 1456.30 5.69 0.00 0.00 87069.33 14633.74 74537.33 00:13:52.247 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:52.247 Verification LBA range: start 0x0 length 0x80000 00:13:52.247 Nvme2n3 : 5.09 1457.37 5.69 0.00 0.00 86926.26 14949.58 71168.41 00:13:52.247 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:52.247 Verification LBA range: start 0x80000 length 0x80000 00:13:52.247 Nvme2n3 : 5.10 1455.98 5.69 0.00 0.00 86935.59 14528.46 75800.67 00:13:52.247 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:52.247 Verification LBA range: start 0x0 length 0x20000 00:13:52.247 Nvme3n1 : 5.10 1456.90 5.69 0.00 0.00 86789.88 13054.56 72431.76 00:13:52.247 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:52.247 Verification LBA range: start 0x20000 length 0x20000 00:13:52.247 
Nvme3n1 : 5.10 1455.66 5.69 0.00 0.00 86805.97 10264.67 76642.90 00:13:52.247 [2024-11-20T13:33:04.204Z] =================================================================================================================== 00:13:52.247 [2024-11-20T13:33:04.204Z] Total : 20381.98 79.62 0.00 0.00 87225.88 10264.67 76642.90 00:13:53.626 00:13:53.626 real 0m7.726s 00:13:53.626 user 0m14.266s 00:13:53.626 sys 0m0.325s 00:13:53.626 13:33:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.626 13:33:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:53.626 ************************************ 00:13:53.626 END TEST bdev_verify 00:13:53.626 ************************************ 00:13:53.626 13:33:05 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:53.626 13:33:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:53.626 13:33:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.626 13:33:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:53.626 ************************************ 00:13:53.626 START TEST bdev_verify_big_io 00:13:53.626 ************************************ 00:13:53.626 13:33:05 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:53.886 [2024-11-20 13:33:05.603534] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:13:53.886 [2024-11-20 13:33:05.603691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63421 ] 00:13:53.886 [2024-11-20 13:33:05.795217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:54.145 [2024-11-20 13:33:05.962524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.146 [2024-11-20 13:33:05.962557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.086 Running I/O for 5 seconds... 
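A quick consistency check on the verify Total row above: at the 4 KiB I/O size used here, the MiB/s column should equal IOPS x 4096 / 2^20, and it does:

awk 'BEGIN { printf "%.2f MiB/s\n", 20381.98 * 4096 / 1048576 }'   # -> 79.62 MiB/s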
00:14:00.277 2497.00 IOPS, 156.06 MiB/s [2024-11-20T13:33:12.801Z] 3245.50 IOPS, 202.84 MiB/s [2024-11-20T13:33:12.801Z] 3800.67 IOPS, 237.54 MiB/s 00:14:00.844 Latency(us) 00:14:00.844 [2024-11-20T13:33:12.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.844 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:00.844 Verification LBA range: start 0x0 length 0xbd0b 00:14:00.844 Nvme0n1 : 5.67 143.20 8.95 0.00 0.00 865159.02 22424.37 902870.26 00:14:00.844 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:00.844 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:00.844 Nvme0n1 : 5.67 132.07 8.25 0.00 0.00 923269.94 15054.86 1293664.85 00:14:00.844 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:00.844 Verification LBA range: start 0x0 length 0x4ff8 00:14:00.844 Nvme1n1p1 : 5.73 134.10 8.38 0.00 0.00 901816.37 67378.38 1179121.61 00:14:00.844 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:00.844 Verification LBA range: start 0x4ff8 length 0x4ff8 00:14:00.844 Nvme1n1p1 : 5.74 137.58 8.60 0.00 0.00 876190.77 29688.60 1313878.36 00:14:00.844 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:00.844 Verification LBA range: start 0x0 length 0x4ff7 00:14:00.844 Nvme1n1p2 : 5.73 134.03 8.38 0.00 0.00 880551.72 125492.23 1206072.96 00:14:00.844 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:00.844 Verification LBA range: start 0x4ff7 length 0x4ff7 00:14:00.844 Nvme1n1p2 : 5.74 137.52 8.59 0.00 0.00 855223.24 44217.06 1334091.87 00:14:00.844 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:00.844 Verification LBA range: start 0x0 length 0x8000 00:14:00.844 Nvme2n1 : 5.73 148.30 9.27 0.00 0.00 784878.09 54744.93 1010675.66 00:14:00.844 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:00.844 Verification LBA range: start 0x8000 length 0x8000 00:14:00.844 Nvme2n1 : 5.81 142.06 8.88 0.00 0.00 812494.93 58534.97 1347567.55 00:14:00.844 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:00.844 Verification LBA range: start 0x0 length 0x8000 00:14:00.844 Nvme2n2 : 5.76 155.46 9.72 0.00 0.00 736906.12 29056.93 795064.85 00:14:00.844 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:00.844 Verification LBA range: start 0x8000 length 0x8000 00:14:00.844 Nvme2n2 : 5.81 144.77 9.05 0.00 0.00 781586.68 60640.54 1367781.06 00:14:00.844 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:00.844 Verification LBA range: start 0x0 length 0x8000 00:14:00.844 Nvme2n3 : 5.81 159.79 9.99 0.00 0.00 699451.95 41900.93 832122.96 00:14:00.844 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:00.844 Verification LBA range: start 0x8000 length 0x8000 00:14:00.845 Nvme2n3 : 5.85 156.41 9.78 0.00 0.00 709853.87 11685.94 1387994.58 00:14:00.845 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:00.845 Verification LBA range: start 0x0 length 0x2000 00:14:00.845 Nvme3n1 : 5.83 171.52 10.72 0.00 0.00 638069.21 3790.03 832122.96 00:14:00.845 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:00.845 Verification LBA range: start 0x2000 length 0x2000 00:14:00.845 Nvme3n1 : 5.86 167.31 10.46 0.00 0.00 648940.76 3816.35 1266713.50 00:14:00.845 
[2024-11-20T13:33:12.802Z] =================================================================================================================== 00:14:00.845 [2024-11-20T13:33:12.802Z] Total : 2064.12 129.01 0.00 0.00 785619.86 3790.03 1387994.58 00:14:02.750 00:14:02.750 real 0m9.166s 00:14:02.750 user 0m17.051s 00:14:02.750 sys 0m0.351s 00:14:02.750 13:33:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.750 13:33:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.750 ************************************ 00:14:02.750 END TEST bdev_verify_big_io 00:14:02.750 ************************************ 00:14:03.009 13:33:14 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:03.009 13:33:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:03.009 13:33:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.009 13:33:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:03.009 ************************************ 00:14:03.009 START TEST bdev_write_zeroes 00:14:03.009 ************************************ 00:14:03.009 13:33:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:03.009 [2024-11-20 13:33:14.841069] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:14:03.009 [2024-11-20 13:33:14.841205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63538 ] 00:14:03.267 [2024-11-20 13:33:15.025577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.267 [2024-11-20 13:33:15.148625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.202 Running I/O for 1 seconds... 
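The two runs traced above differ from the plain verify pass only in their bdevperf flags (both command lines are copied from this log): the big-I/O pass uses -o 65536, i.e. 64 KiB blocks, trading IOPS for higher MiB/s, while the write_zeroes pass just launched uses -w write_zeroes for one second on a single core with no -C:

# big-I/O verify:
#   bdevperf --json .../test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
# write_zeroes:
#   bdevperf --json .../test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''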
00:14:05.163 66752.00 IOPS, 260.75 MiB/s 00:14:05.163 Latency(us) 00:14:05.163 [2024-11-20T13:33:17.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.163 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:05.163 Nvme0n1 : 1.02 9510.05 37.15 0.00 0.00 13430.39 11685.94 32425.84 00:14:05.163 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:05.163 Nvme1n1p1 : 1.02 9498.63 37.10 0.00 0.00 13426.27 11843.86 32636.40 00:14:05.163 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:05.163 Nvme1n1p2 : 1.03 9488.29 37.06 0.00 0.00 13376.80 11528.02 29688.60 00:14:05.163 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:05.163 Nvme2n1 : 1.03 9479.78 37.03 0.00 0.00 13336.16 11528.02 28004.14 00:14:05.163 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:05.163 Nvme2n2 : 1.03 9471.29 37.00 0.00 0.00 13304.03 11422.74 25056.33 00:14:05.163 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:05.163 Nvme2n3 : 1.03 9462.84 36.96 0.00 0.00 13273.03 10685.79 24214.10 00:14:05.163 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:05.163 Nvme3n1 : 1.03 9512.96 37.16 0.00 0.00 13204.64 6658.88 23371.87 00:14:05.163 [2024-11-20T13:33:17.120Z] =================================================================================================================== 00:14:05.163 [2024-11-20T13:33:17.120Z] Total : 66423.83 259.47 0.00 0.00 13335.78 6658.88 32636.40 00:14:06.541 00:14:06.541 real 0m3.321s 00:14:06.541 user 0m2.937s 00:14:06.541 sys 0m0.268s 00:14:06.541 13:33:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.541 13:33:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:06.541 ************************************ 00:14:06.541 END TEST bdev_write_zeroes 00:14:06.541 ************************************ 00:14:06.541 13:33:18 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:06.541 13:33:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:06.541 13:33:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.541 13:33:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:06.541 ************************************ 00:14:06.541 START TEST bdev_json_nonenclosed 00:14:06.541 ************************************ 00:14:06.541 13:33:18 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:06.541 [2024-11-20 13:33:18.243406] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:14:06.541 [2024-11-20 13:33:18.243549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63597 ] 00:14:06.541 [2024-11-20 13:33:18.428957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.800 [2024-11-20 13:33:18.546787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.800 [2024-11-20 13:33:18.546895] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:06.800 [2024-11-20 13:33:18.546917] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:06.800 [2024-11-20 13:33:18.546930] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:07.060 00:14:07.060 real 0m0.666s 00:14:07.060 user 0m0.414s 00:14:07.060 sys 0m0.147s 00:14:07.060 13:33:18 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.060 13:33:18 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:07.060 ************************************ 00:14:07.060 END TEST bdev_json_nonenclosed 00:14:07.060 ************************************ 00:14:07.060 13:33:18 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:07.060 13:33:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:07.060 13:33:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.060 13:33:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:07.060 ************************************ 00:14:07.060 START TEST bdev_json_nonarray 00:14:07.060 ************************************ 00:14:07.060 13:33:18 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:07.060 [2024-11-20 13:33:18.975770] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:14:07.060 [2024-11-20 13:33:18.975915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63619 ] 00:14:07.319 [2024-11-20 13:33:19.159771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.578 [2024-11-20 13:33:19.277394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.578 [2024-11-20 13:33:19.277505] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
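Both JSON tests here are negative tests: bdevperf is handed a deliberately malformed --json config and must fail cleanly through spdk_app_stop rather than crash. The actual contents of nonenclosed.json and nonarray.json are not shown in this log; the shapes below are plausible illustrations (assumptions, not the real files) of inputs matching the two errors printed above:

# plausibly triggers "not enclosed in {}." -- top level is not a JSON object:
#   "subsystems": []
# plausibly triggers "'subsystems' should be an array.":
#   { "subsystems": {} }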
00:14:07.578 [2024-11-20 13:33:19.277527] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:07.578 [2024-11-20 13:33:19.277540] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:07.837 00:14:07.837 real 0m0.661s 00:14:07.837 user 0m0.427s 00:14:07.837 sys 0m0.128s 00:14:07.837 13:33:19 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.837 13:33:19 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:07.837 ************************************ 00:14:07.837 END TEST bdev_json_nonarray 00:14:07.837 ************************************ 00:14:07.837 13:33:19 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:14:07.837 13:33:19 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:14:07.837 13:33:19 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:14:07.837 13:33:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:07.837 13:33:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.837 13:33:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:07.838 ************************************ 00:14:07.838 START TEST bdev_gpt_uuid 00:14:07.838 ************************************ 00:14:07.838 13:33:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:14:07.838 13:33:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:14:07.838 13:33:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:14:07.838 13:33:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63648 00:14:07.838 13:33:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:07.838 13:33:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:07.838 13:33:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63648 00:14:07.838 13:33:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63648 ']' 00:14:07.838 13:33:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.838 13:33:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.838 13:33:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.838 13:33:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.838 13:33:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:07.838 [2024-11-20 13:33:19.727707] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
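The bdev_gpt_uuid test starting here loads the same bdev.json into spdk_tgt and then verifies each GPT partition bdev by its partition GUID. A minimal sketch of the check it performs below, assuming the default /var/tmp/spdk.sock RPC socket (the UUID is SPDK_TEST_first's, taken from this log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=6f89f330-603b-4116-ac73-2ca8eae53030
bdev_json=$("$rpc" bdev_get_bdevs -b "$uuid")                # look the bdev up by UUID
[[ $(jq -r 'length' <<<"$bdev_json") == 1 ]]                 # exactly one match
[[ $(jq -r '.[0].aliases[0]' <<<"$bdev_json") == "$uuid" ]]  # alias is the partition GUID
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev_json") == "$uuid" ]]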
00:14:07.838 [2024-11-20 13:33:19.728339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63648 ] 00:14:08.096 [2024-11-20 13:33:19.911458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.096 [2024-11-20 13:33:20.030448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.032 13:33:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.032 13:33:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:14:09.032 13:33:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:09.032 13:33:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.032 13:33:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:09.291 Some configs were skipped because the RPC state that can call them passed over. 00:14:09.291 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.291 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:14:09.291 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.291 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:09.291 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.291 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:14:09.291 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.291 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:14:09.606 { 00:14:09.606 "name": "Nvme1n1p1", 00:14:09.606 "aliases": [ 00:14:09.606 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:14:09.606 ], 00:14:09.606 "product_name": "GPT Disk", 00:14:09.606 "block_size": 4096, 00:14:09.606 "num_blocks": 655104, 00:14:09.606 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:14:09.606 "assigned_rate_limits": { 00:14:09.606 "rw_ios_per_sec": 0, 00:14:09.606 "rw_mbytes_per_sec": 0, 00:14:09.606 "r_mbytes_per_sec": 0, 00:14:09.606 "w_mbytes_per_sec": 0 00:14:09.606 }, 00:14:09.606 "claimed": false, 00:14:09.606 "zoned": false, 00:14:09.606 "supported_io_types": { 00:14:09.606 "read": true, 00:14:09.606 "write": true, 00:14:09.606 "unmap": true, 00:14:09.606 "flush": true, 00:14:09.606 "reset": true, 00:14:09.606 "nvme_admin": false, 00:14:09.606 "nvme_io": false, 00:14:09.606 "nvme_io_md": false, 00:14:09.606 "write_zeroes": true, 00:14:09.606 "zcopy": false, 00:14:09.606 "get_zone_info": false, 00:14:09.606 "zone_management": false, 00:14:09.606 "zone_append": false, 00:14:09.606 "compare": true, 00:14:09.606 "compare_and_write": false, 00:14:09.606 "abort": true, 00:14:09.606 "seek_hole": false, 00:14:09.606 "seek_data": false, 00:14:09.606 "copy": true, 00:14:09.606 "nvme_iov_md": false 00:14:09.606 }, 00:14:09.606 "driver_specific": { 
00:14:09.606 "gpt": { 00:14:09.606 "base_bdev": "Nvme1n1", 00:14:09.606 "offset_blocks": 256, 00:14:09.606 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:14:09.606 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:14:09.606 "partition_name": "SPDK_TEST_first" 00:14:09.606 } 00:14:09.606 } 00:14:09.606 } 00:14:09.606 ]' 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:14:09.606 { 00:14:09.606 "name": "Nvme1n1p2", 00:14:09.606 "aliases": [ 00:14:09.606 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:14:09.606 ], 00:14:09.606 "product_name": "GPT Disk", 00:14:09.606 "block_size": 4096, 00:14:09.606 "num_blocks": 655103, 00:14:09.606 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:14:09.606 "assigned_rate_limits": { 00:14:09.606 "rw_ios_per_sec": 0, 00:14:09.606 "rw_mbytes_per_sec": 0, 00:14:09.606 "r_mbytes_per_sec": 0, 00:14:09.606 "w_mbytes_per_sec": 0 00:14:09.606 }, 00:14:09.606 "claimed": false, 00:14:09.606 "zoned": false, 00:14:09.606 "supported_io_types": { 00:14:09.606 "read": true, 00:14:09.606 "write": true, 00:14:09.606 "unmap": true, 00:14:09.606 "flush": true, 00:14:09.606 "reset": true, 00:14:09.606 "nvme_admin": false, 00:14:09.606 "nvme_io": false, 00:14:09.606 "nvme_io_md": false, 00:14:09.606 "write_zeroes": true, 00:14:09.606 "zcopy": false, 00:14:09.606 "get_zone_info": false, 00:14:09.606 "zone_management": false, 00:14:09.606 "zone_append": false, 00:14:09.606 "compare": true, 00:14:09.606 "compare_and_write": false, 00:14:09.606 "abort": true, 00:14:09.606 "seek_hole": false, 00:14:09.606 "seek_data": false, 00:14:09.606 "copy": true, 00:14:09.606 "nvme_iov_md": false 00:14:09.606 }, 00:14:09.606 "driver_specific": { 00:14:09.606 "gpt": { 00:14:09.606 "base_bdev": "Nvme1n1", 00:14:09.606 "offset_blocks": 655360, 00:14:09.606 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:14:09.606 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:14:09.606 "partition_name": "SPDK_TEST_second" 00:14:09.606 } 00:14:09.606 } 00:14:09.606 } 00:14:09.606 ]' 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63648 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63648 ']' 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63648 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.606 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63648 00:14:09.879 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.879 killing process with pid 63648 00:14:09.879 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.879 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63648' 00:14:09.879 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63648 00:14:09.879 13:33:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63648 00:14:12.412 00:14:12.412 real 0m4.402s 00:14:12.412 user 0m4.470s 00:14:12.412 sys 0m0.555s 00:14:12.412 13:33:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.412 13:33:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:14:12.412 ************************************ 00:14:12.412 END TEST bdev_gpt_uuid 00:14:12.412 ************************************ 00:14:12.412 13:33:24 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:14:12.412 13:33:24 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:14:12.412 13:33:24 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:14:12.412 13:33:24 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:12.412 13:33:24 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:12.412 13:33:24 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:14:12.412 13:33:24 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:14:12.412 13:33:24 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:14:12.412 13:33:24 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:12.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:12.929 Waiting for block devices as requested 00:14:12.929 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:13.189 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:14:13.189 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:13.448 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:18.718 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:18.718 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:14:18.718 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:14:18.718 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:14:18.718 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:14:18.718 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:14:18.718 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:14:18.718 13:33:30 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:14:18.718 00:14:18.718 real 1m6.294s 00:14:18.718 user 1m22.663s 00:14:18.718 sys 0m12.310s 00:14:18.718 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.718 13:33:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:18.718 ************************************ 00:14:18.718 END TEST blockdev_nvme_gpt 00:14:18.718 ************************************ 00:14:18.718 13:33:30 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:14:18.718 13:33:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:18.718 13:33:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.718 13:33:30 -- common/autotest_common.sh@10 -- # set +x 00:14:18.718 ************************************ 00:14:18.718 START TEST nvme 00:14:18.718 ************************************ 00:14:18.718 13:33:30 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:14:18.989 * Looking for test storage... 00:14:18.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:18.989 13:33:30 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:18.989 13:33:30 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:18.989 13:33:30 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:18.989 13:33:30 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:18.990 13:33:30 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:18.990 13:33:30 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:18.990 13:33:30 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:18.990 13:33:30 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.990 13:33:30 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:18.990 13:33:30 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:18.990 13:33:30 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:18.990 13:33:30 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:18.990 13:33:30 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:18.990 13:33:30 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:18.990 13:33:30 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:18.990 13:33:30 nvme -- scripts/common.sh@344 -- # case "$op" in 00:14:18.990 13:33:30 nvme -- scripts/common.sh@345 -- # : 1 00:14:18.990 13:33:30 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:18.990 13:33:30 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:18.990 13:33:30 nvme -- scripts/common.sh@365 -- # decimal 1 00:14:18.990 13:33:30 nvme -- scripts/common.sh@353 -- # local d=1 00:14:18.990 13:33:30 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.990 13:33:30 nvme -- scripts/common.sh@355 -- # echo 1 00:14:18.990 13:33:30 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:18.990 13:33:30 nvme -- scripts/common.sh@366 -- # decimal 2 00:14:18.990 13:33:30 nvme -- scripts/common.sh@353 -- # local d=2 00:14:18.990 13:33:30 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:18.990 13:33:30 nvme -- scripts/common.sh@355 -- # echo 2 00:14:18.990 13:33:30 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:18.990 13:33:30 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:18.990 13:33:30 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:18.990 13:33:30 nvme -- scripts/common.sh@368 -- # return 0 00:14:18.990 13:33:30 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:18.990 13:33:30 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:18.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.990 --rc genhtml_branch_coverage=1 00:14:18.990 --rc genhtml_function_coverage=1 00:14:18.990 --rc genhtml_legend=1 00:14:18.990 --rc geninfo_all_blocks=1 00:14:18.990 --rc geninfo_unexecuted_blocks=1 00:14:18.990 00:14:18.990 ' 00:14:18.990 13:33:30 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:18.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.990 --rc genhtml_branch_coverage=1 00:14:18.990 --rc genhtml_function_coverage=1 00:14:18.990 --rc genhtml_legend=1 00:14:18.990 --rc geninfo_all_blocks=1 00:14:18.990 --rc geninfo_unexecuted_blocks=1 00:14:18.990 00:14:18.990 ' 00:14:18.990 13:33:30 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:18.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.990 --rc genhtml_branch_coverage=1 00:14:18.990 --rc genhtml_function_coverage=1 00:14:18.990 --rc genhtml_legend=1 00:14:18.990 --rc geninfo_all_blocks=1 00:14:18.990 --rc geninfo_unexecuted_blocks=1 00:14:18.990 00:14:18.990 ' 00:14:18.990 13:33:30 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:18.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.990 --rc genhtml_branch_coverage=1 00:14:18.990 --rc genhtml_function_coverage=1 00:14:18.990 --rc genhtml_legend=1 00:14:18.990 --rc geninfo_all_blocks=1 00:14:18.990 --rc geninfo_unexecuted_blocks=1 00:14:18.990 00:14:18.990 ' 00:14:18.990 13:33:30 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:19.925 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:20.493 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:20.493 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:20.493 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:20.753 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:20.753 13:33:32 nvme -- nvme/nvme.sh@79 -- # uname 00:14:20.753 13:33:32 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:14:20.753 13:33:32 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:14:20.753 13:33:32 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:14:20.753 13:33:32 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:14:20.753 13:33:32 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:14:20.753 13:33:32 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:14:20.753 13:33:32 nvme -- common/autotest_common.sh@1075 -- # stubpid=64312 00:14:20.753 13:33:32 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:14:20.753 Waiting for stub to ready for secondary processes... 00:14:20.753 13:33:32 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:14:20.753 13:33:32 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:14:20.753 13:33:32 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64312 ]] 00:14:20.753 13:33:32 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:14:20.753 [2024-11-20 13:33:32.625959] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:14:20.753 [2024-11-20 13:33:32.626087] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:14:21.690 13:33:33 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:14:21.690 13:33:33 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64312 ]] 00:14:21.690 13:33:33 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:14:21.949 [2024-11-20 13:33:33.647738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:21.949 [2024-11-20 13:33:33.766440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.949 [2024-11-20 13:33:33.766991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.949 [2024-11-20 13:33:33.767019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.949 [2024-11-20 13:33:33.785049] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:14:21.949 [2024-11-20 13:33:33.785084] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:21.949 [2024-11-20 13:33:33.802771] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:14:21.949 [2024-11-20 13:33:33.802931] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:14:21.949 [2024-11-20 13:33:33.806358] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:21.949 [2024-11-20 13:33:33.806652] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:14:21.949 [2024-11-20 13:33:33.806755] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:14:21.949 [2024-11-20 13:33:33.809807] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:21.949 [2024-11-20 13:33:33.810035] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:14:21.949 [2024-11-20 13:33:33.810138] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:14:21.949 [2024-11-20 13:33:33.813635] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:21.949 [2024-11-20 13:33:33.813807] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:14:21.949 [2024-11-20 13:33:33.813883] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:14:21.949 [2024-11-20 13:33:33.813938] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:14:21.949 [2024-11-20 13:33:33.813988] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:14:22.885 done. 00:14:22.885 13:33:34 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:14:22.885 13:33:34 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:14:22.885 13:33:34 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:14:22.885 13:33:34 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:14:22.886 13:33:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.886 13:33:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:22.886 ************************************ 00:14:22.886 START TEST nvme_reset 00:14:22.886 ************************************ 00:14:22.886 13:33:34 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:14:23.144 Initializing NVMe Controllers 00:14:23.144 Skipping QEMU NVMe SSD at 0000:00:10.0 00:14:23.144 Skipping QEMU NVMe SSD at 0000:00:11.0 00:14:23.144 Skipping QEMU NVMe SSD at 0000:00:13.0 00:14:23.144 Skipping QEMU NVMe SSD at 0000:00:12.0 00:14:23.144 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:14:23.144 00:14:23.144 real 0m0.295s 00:14:23.144 user 0m0.123s 00:14:23.144 sys 0m0.131s 00:14:23.144 13:33:34 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.144 13:33:34 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:14:23.144 ************************************ 00:14:23.144 END TEST nvme_reset 00:14:23.144 ************************************ 00:14:23.144 13:33:34 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:14:23.144 13:33:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:23.144 13:33:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.144 13:33:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:23.144 ************************************ 00:14:23.144 START TEST nvme_identify 00:14:23.144 ************************************ 00:14:23.144 13:33:34 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:14:23.144 13:33:34 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:14:23.144 13:33:34 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:14:23.144 13:33:34 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:14:23.144 13:33:34 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:14:23.144 13:33:34 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:23.144 13:33:34 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:14:23.144 13:33:34 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:23.144 13:33:34 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:23.144 13:33:34 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:23.144 13:33:35 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:23.144 13:33:35 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:23.144 13:33:35 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:14:23.406 [2024-11-20 13:33:35.338054] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64346 terminated unexpected 00:14:23.406 ===================================================== 00:14:23.406 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:23.406 ===================================================== 00:14:23.406 Controller Capabilities/Features 00:14:23.406 ================================ 00:14:23.406 Vendor ID: 1b36 00:14:23.406 Subsystem Vendor ID: 1af4 00:14:23.406 Serial Number: 12340 00:14:23.406 Model Number: QEMU NVMe Ctrl 00:14:23.406 Firmware Version: 8.0.0 00:14:23.406 Recommended Arb Burst: 6 00:14:23.406 IEEE OUI Identifier: 00 54 52 00:14:23.406 Multi-path I/O 00:14:23.406 May have multiple subsystem ports: No 00:14:23.406 May have multiple controllers: No 00:14:23.406 Associated with SR-IOV VF: No 00:14:23.406 Max Data Transfer Size: 524288 00:14:23.406 Max Number of Namespaces: 256 00:14:23.406 Max Number of I/O Queues: 64 00:14:23.406 NVMe Specification Version (VS): 1.4 00:14:23.406 NVMe Specification Version (Identify): 1.4 00:14:23.406 Maximum Queue Entries: 2048 00:14:23.406 Contiguous Queues Required: Yes 00:14:23.406 Arbitration Mechanisms Supported 00:14:23.406 Weighted Round Robin: Not Supported 00:14:23.406 Vendor Specific: Not Supported 00:14:23.406 Reset Timeout: 7500 ms 00:14:23.406 Doorbell Stride: 4 bytes 00:14:23.406 NVM Subsystem Reset: Not Supported 00:14:23.406 Command Sets Supported 00:14:23.406 NVM Command Set: Supported 00:14:23.406 Boot Partition: Not Supported 00:14:23.406 Memory Page Size Minimum: 4096 bytes 00:14:23.406 Memory Page Size Maximum: 65536 bytes 00:14:23.406 Persistent Memory Region: Not Supported 00:14:23.406 Optional Asynchronous Events Supported 00:14:23.407 Namespace Attribute Notices: Supported 00:14:23.407 Firmware Activation Notices: Not Supported 00:14:23.407 ANA Change Notices: Not Supported 00:14:23.407 PLE Aggregate Log Change Notices: Not Supported 00:14:23.407 LBA Status Info Alert Notices: Not Supported 00:14:23.407 EGE Aggregate Log Change Notices: Not Supported 00:14:23.407 Normal NVM Subsystem Shutdown event: Not Supported 00:14:23.407 Zone Descriptor Change Notices: Not Supported 00:14:23.407 Discovery Log Change Notices: Not Supported 00:14:23.407 Controller Attributes 00:14:23.407 128-bit Host Identifier: Not Supported 00:14:23.407 Non-Operational Permissive Mode: Not Supported 00:14:23.407 NVM Sets: Not Supported 00:14:23.407 Read Recovery Levels: Not Supported 00:14:23.407 Endurance Groups: Not Supported 00:14:23.407 Predictable Latency Mode: Not Supported 00:14:23.407 Traffic Based Keep ALive: Not Supported 00:14:23.407 Namespace Granularity: Not Supported 00:14:23.407 SQ Associations: Not Supported 00:14:23.407 UUID List: Not Supported 00:14:23.407 Multi-Domain Subsystem: Not Supported 00:14:23.407 Fixed Capacity Management: Not Supported 00:14:23.407 Variable Capacity Management: Not Supported 00:14:23.407 Delete Endurance Group: Not Supported 00:14:23.407 Delete NVM Set: Not Supported 00:14:23.407 Extended LBA Formats Supported: Supported 00:14:23.407 Flexible Data Placement Supported: Not Supported 00:14:23.407 00:14:23.407 Controller Memory Buffer Support 00:14:23.407 ================================ 00:14:23.407 Supported: No 
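The BDF list fed to identify here was gathered a few statements up by piping gen_nvme.sh's JSON config through jq. A minimal standalone sketch of the same enumeration in bash, assuming the /home/vagrant/spdk_repo/spdk checkout used throughout this log:
rootdir=/home/vagrant/spdk_repo/spdk
# gen_nvme.sh emits one config entry per locally attached NVMe controller;
# jq extracts each PCI address (params.traddr) into a bash array.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
printf '%s\n' "${bdfs[@]}"   # here that yields 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0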
00:14:23.407 00:14:23.407 Persistent Memory Region Support 00:14:23.407 ================================ 00:14:23.407 Supported: No 00:14:23.407 00:14:23.407 Admin Command Set Attributes 00:14:23.407 ============================ 00:14:23.407 Security Send/Receive: Not Supported 00:14:23.407 Format NVM: Supported 00:14:23.407 Firmware Activate/Download: Not Supported 00:14:23.407 Namespace Management: Supported 00:14:23.407 Device Self-Test: Not Supported 00:14:23.407 Directives: Supported 00:14:23.407 NVMe-MI: Not Supported 00:14:23.407 Virtualization Management: Not Supported 00:14:23.407 Doorbell Buffer Config: Supported 00:14:23.407 Get LBA Status Capability: Not Supported 00:14:23.407 Command & Feature Lockdown Capability: Not Supported 00:14:23.407 Abort Command Limit: 4 00:14:23.407 Async Event Request Limit: 4 00:14:23.407 Number of Firmware Slots: N/A 00:14:23.407 Firmware Slot 1 Read-Only: N/A 00:14:23.407 Firmware Activation Without Reset: N/A 00:14:23.407 Multiple Update Detection Support: N/A 00:14:23.407 Firmware Update Granularity: No Information Provided 00:14:23.407 Per-Namespace SMART Log: Yes 00:14:23.407 Asymmetric Namespace Access Log Page: Not Supported 00:14:23.407 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:14:23.407 Command Effects Log Page: Supported 00:14:23.407 Get Log Page Extended Data: Supported 00:14:23.407 Telemetry Log Pages: Not Supported 00:14:23.407 Persistent Event Log Pages: Not Supported 00:14:23.407 Supported Log Pages Log Page: May Support 00:14:23.407 Commands Supported & Effects Log Page: Not Supported 00:14:23.407 Feature Identifiers & Effects Log Page:May Support 00:14:23.407 NVMe-MI Commands & Effects Log Page: May Support 00:14:23.407 Data Area 4 for Telemetry Log: Not Supported 00:14:23.407 Error Log Page Entries Supported: 1 00:14:23.407 Keep Alive: Not Supported 00:14:23.407 00:14:23.407 NVM Command Set Attributes 00:14:23.407 ========================== 00:14:23.407 Submission Queue Entry Size 00:14:23.407 Max: 64 00:14:23.407 Min: 64 00:14:23.407 Completion Queue Entry Size 00:14:23.407 Max: 16 00:14:23.407 Min: 16 00:14:23.407 Number of Namespaces: 256 00:14:23.407 Compare Command: Supported 00:14:23.407 Write Uncorrectable Command: Not Supported 00:14:23.407 Dataset Management Command: Supported 00:14:23.407 Write Zeroes Command: Supported 00:14:23.407 Set Features Save Field: Supported 00:14:23.407 Reservations: Not Supported 00:14:23.407 Timestamp: Supported 00:14:23.407 Copy: Supported 00:14:23.407 Volatile Write Cache: Present 00:14:23.407 Atomic Write Unit (Normal): 1 00:14:23.407 Atomic Write Unit (PFail): 1 00:14:23.407 Atomic Compare & Write Unit: 1 00:14:23.407 Fused Compare & Write: Not Supported 00:14:23.407 Scatter-Gather List 00:14:23.407 SGL Command Set: Supported 00:14:23.407 SGL Keyed: Not Supported 00:14:23.407 SGL Bit Bucket Descriptor: Not Supported 00:14:23.407 SGL Metadata Pointer: Not Supported 00:14:23.407 Oversized SGL: Not Supported 00:14:23.407 SGL Metadata Address: Not Supported 00:14:23.407 SGL Offset: Not Supported 00:14:23.407 Transport SGL Data Block: Not Supported 00:14:23.407 Replay Protected Memory Block: Not Supported 00:14:23.407 00:14:23.407 Firmware Slot Information 00:14:23.407 ========================= 00:14:23.407 Active slot: 1 00:14:23.407 Slot 1 Firmware Revision: 1.0 00:14:23.407 00:14:23.407 00:14:23.407 Commands Supported and Effects 00:14:23.407 ============================== 00:14:23.407 Admin Commands 00:14:23.407 -------------- 00:14:23.407 Delete I/O Submission Queue (00h): Supported 
00:14:23.407 Create I/O Submission Queue (01h): Supported 00:14:23.407 Get Log Page (02h): Supported 00:14:23.407 Delete I/O Completion Queue (04h): Supported 00:14:23.407 Create I/O Completion Queue (05h): Supported 00:14:23.407 Identify (06h): Supported 00:14:23.407 Abort (08h): Supported 00:14:23.407 Set Features (09h): Supported 00:14:23.407 Get Features (0Ah): Supported 00:14:23.407 Asynchronous Event Request (0Ch): Supported 00:14:23.407 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:23.407 Directive Send (19h): Supported 00:14:23.407 Directive Receive (1Ah): Supported 00:14:23.407 Virtualization Management (1Ch): Supported 00:14:23.407 Doorbell Buffer Config (7Ch): Supported 00:14:23.407 Format NVM (80h): Supported LBA-Change 00:14:23.407 I/O Commands 00:14:23.407 ------------ 00:14:23.407 Flush (00h): Supported LBA-Change 00:14:23.407 Write (01h): Supported LBA-Change 00:14:23.407 Read (02h): Supported 00:14:23.407 Compare (05h): Supported 00:14:23.407 Write Zeroes (08h): Supported LBA-Change 00:14:23.407 Dataset Management (09h): Supported LBA-Change 00:14:23.407 Unknown (0Ch): Supported 00:14:23.407 Unknown (12h): Supported 00:14:23.407 Copy (19h): Supported LBA-Change 00:14:23.407 Unknown (1Dh): Supported LBA-Change 00:14:23.407 00:14:23.407 Error Log 00:14:23.407 ========= 00:14:23.407 00:14:23.407 Arbitration 00:14:23.407 =========== 00:14:23.407 Arbitration Burst: no limit 00:14:23.407 00:14:23.407 Power Management 00:14:23.407 ================ 00:14:23.407 Number of Power States: 1 00:14:23.407 Current Power State: Power State #0 00:14:23.407 Power State #0: 00:14:23.407 Max Power: 25.00 W 00:14:23.407 Non-Operational State: Operational 00:14:23.407 Entry Latency: 16 microseconds 00:14:23.407 Exit Latency: 4 microseconds 00:14:23.407 Relative Read Throughput: 0 00:14:23.407 Relative Read Latency: 0 00:14:23.407 Relative Write Throughput: 0 00:14:23.407 Relative Write Latency: 0 00:14:23.407 Idle Power[2024-11-20 13:33:35.339663] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64346 terminated unexpected 00:14:23.407 : Not Reported 00:14:23.407 Active Power: Not Reported 00:14:23.407 Non-Operational Permissive Mode: Not Supported 00:14:23.407 00:14:23.407 Health Information 00:14:23.407 ================== 00:14:23.407 Critical Warnings: 00:14:23.407 Available Spare Space: OK 00:14:23.407 Temperature: OK 00:14:23.407 Device Reliability: OK 00:14:23.407 Read Only: No 00:14:23.407 Volatile Memory Backup: OK 00:14:23.407 Current Temperature: 323 Kelvin (50 Celsius) 00:14:23.407 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:23.407 Available Spare: 0% 00:14:23.407 Available Spare Threshold: 0% 00:14:23.407 Life Percentage Used: 0% 00:14:23.407 Data Units Read: 746 00:14:23.407 Data Units Written: 674 00:14:23.407 Host Read Commands: 33803 00:14:23.407 Host Write Commands: 33589 00:14:23.407 Controller Busy Time: 0 minutes 00:14:23.407 Power Cycles: 0 00:14:23.407 Power On Hours: 0 hours 00:14:23.407 Unsafe Shutdowns: 0 00:14:23.407 Unrecoverable Media Errors: 0 00:14:23.407 Lifetime Error Log Entries: 0 00:14:23.407 Warning Temperature Time: 0 minutes 00:14:23.407 Critical Temperature Time: 0 minutes 00:14:23.407 00:14:23.407 Number of Queues 00:14:23.407 ================ 00:14:23.407 Number of I/O Submission Queues: 64 00:14:23.407 Number of I/O Completion Queues: 64 00:14:23.407 00:14:23.407 ZNS Specific Controller Data 00:14:23.407 ============================ 00:14:23.408 Zone Append Size Limit: 0 00:14:23.408 
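By default identify probes and dumps every local controller, which is why four reports follow in sequence; a single device can be targeted instead. A small sketch with the same in-tree binary (the -r transport-ID flag is an assumption from the tool's usage text, not exercised in this log):
# Identify only the controller at PCI address 0000:00:10.0.
sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'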
00:14:23.408 00:14:23.408 Active Namespaces 00:14:23.408 ================= 00:14:23.408 Namespace ID:1 00:14:23.408 Error Recovery Timeout: Unlimited 00:14:23.408 Command Set Identifier: NVM (00h) 00:14:23.408 Deallocate: Supported 00:14:23.408 Deallocated/Unwritten Error: Supported 00:14:23.408 Deallocated Read Value: All 0x00 00:14:23.408 Deallocate in Write Zeroes: Not Supported 00:14:23.408 Deallocated Guard Field: 0xFFFF 00:14:23.408 Flush: Supported 00:14:23.408 Reservation: Not Supported 00:14:23.408 Metadata Transferred as: Separate Metadata Buffer 00:14:23.408 Namespace Sharing Capabilities: Private 00:14:23.408 Size (in LBAs): 1548666 (5GiB) 00:14:23.408 Capacity (in LBAs): 1548666 (5GiB) 00:14:23.408 Utilization (in LBAs): 1548666 (5GiB) 00:14:23.408 Thin Provisioning: Not Supported 00:14:23.408 Per-NS Atomic Units: No 00:14:23.408 Maximum Single Source Range Length: 128 00:14:23.408 Maximum Copy Length: 128 00:14:23.408 Maximum Source Range Count: 128 00:14:23.408 NGUID/EUI64 Never Reused: No 00:14:23.408 Namespace Write Protected: No 00:14:23.408 Number of LBA Formats: 8 00:14:23.408 Current LBA Format: LBA Format #07 00:14:23.408 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:23.408 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:23.408 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:23.408 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:23.408 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:23.408 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:23.408 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:23.408 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:23.408 00:14:23.408 NVM Specific Namespace Data 00:14:23.408 =========================== 00:14:23.408 Logical Block Storage Tag Mask: 0 00:14:23.408 Protection Information Capabilities: 00:14:23.408 16b Guard Protection Information Storage Tag Support: No 00:14:23.408 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:23.408 Storage Tag Check Read Support: No 00:14:23.408 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.408 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.408 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.408 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.408 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.408 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.408 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.408 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.408 ===================================================== 00:14:23.408 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:23.408 ===================================================== 00:14:23.408 Controller Capabilities/Features 00:14:23.408 ================================ 00:14:23.408 Vendor ID: 1b36 00:14:23.408 Subsystem Vendor ID: 1af4 00:14:23.408 Serial Number: 12341 00:14:23.408 Model Number: QEMU NVMe Ctrl 00:14:23.408 Firmware Version: 8.0.0 00:14:23.408 Recommended Arb Burst: 6 00:14:23.408 IEEE OUI Identifier: 00 54 52 00:14:23.408 Multi-path I/O 00:14:23.408 May have multiple subsystem ports: No 00:14:23.408 May have multiple controllers: No 
00:14:23.408 Associated with SR-IOV VF: No 00:14:23.408 Max Data Transfer Size: 524288 00:14:23.408 Max Number of Namespaces: 256 00:14:23.408 Max Number of I/O Queues: 64 00:14:23.408 NVMe Specification Version (VS): 1.4 00:14:23.408 NVMe Specification Version (Identify): 1.4 00:14:23.408 Maximum Queue Entries: 2048 00:14:23.408 Contiguous Queues Required: Yes 00:14:23.408 Arbitration Mechanisms Supported 00:14:23.408 Weighted Round Robin: Not Supported 00:14:23.408 Vendor Specific: Not Supported 00:14:23.408 Reset Timeout: 7500 ms 00:14:23.408 Doorbell Stride: 4 bytes 00:14:23.408 NVM Subsystem Reset: Not Supported 00:14:23.408 Command Sets Supported 00:14:23.408 NVM Command Set: Supported 00:14:23.408 Boot Partition: Not Supported 00:14:23.408 Memory Page Size Minimum: 4096 bytes 00:14:23.408 Memory Page Size Maximum: 65536 bytes 00:14:23.408 Persistent Memory Region: Not Supported 00:14:23.408 Optional Asynchronous Events Supported 00:14:23.408 Namespace Attribute Notices: Supported 00:14:23.408 Firmware Activation Notices: Not Supported 00:14:23.408 ANA Change Notices: Not Supported 00:14:23.408 PLE Aggregate Log Change Notices: Not Supported 00:14:23.408 LBA Status Info Alert Notices: Not Supported 00:14:23.408 EGE Aggregate Log Change Notices: Not Supported 00:14:23.408 Normal NVM Subsystem Shutdown event: Not Supported 00:14:23.408 Zone Descriptor Change Notices: Not Supported 00:14:23.408 Discovery Log Change Notices: Not Supported 00:14:23.408 Controller Attributes 00:14:23.408 128-bit Host Identifier: Not Supported 00:14:23.408 Non-Operational Permissive Mode: Not Supported 00:14:23.408 NVM Sets: Not Supported 00:14:23.408 Read Recovery Levels: Not Supported 00:14:23.408 Endurance Groups: Not Supported 00:14:23.408 Predictable Latency Mode: Not Supported 00:14:23.408 Traffic Based Keep ALive: Not Supported 00:14:23.408 Namespace Granularity: Not Supported 00:14:23.408 SQ Associations: Not Supported 00:14:23.408 UUID List: Not Supported 00:14:23.408 Multi-Domain Subsystem: Not Supported 00:14:23.408 Fixed Capacity Management: Not Supported 00:14:23.408 Variable Capacity Management: Not Supported 00:14:23.408 Delete Endurance Group: Not Supported 00:14:23.408 Delete NVM Set: Not Supported 00:14:23.408 Extended LBA Formats Supported: Supported 00:14:23.408 Flexible Data Placement Supported: Not Supported 00:14:23.408 00:14:23.408 Controller Memory Buffer Support 00:14:23.408 ================================ 00:14:23.408 Supported: No 00:14:23.408 00:14:23.408 Persistent Memory Region Support 00:14:23.408 ================================ 00:14:23.408 Supported: No 00:14:23.408 00:14:23.408 Admin Command Set Attributes 00:14:23.408 ============================ 00:14:23.408 Security Send/Receive: Not Supported 00:14:23.408 Format NVM: Supported 00:14:23.408 Firmware Activate/Download: Not Supported 00:14:23.408 Namespace Management: Supported 00:14:23.408 Device Self-Test: Not Supported 00:14:23.408 Directives: Supported 00:14:23.408 NVMe-MI: Not Supported 00:14:23.408 Virtualization Management: Not Supported 00:14:23.408 Doorbell Buffer Config: Supported 00:14:23.408 Get LBA Status Capability: Not Supported 00:14:23.408 Command & Feature Lockdown Capability: Not Supported 00:14:23.408 Abort Command Limit: 4 00:14:23.408 Async Event Request Limit: 4 00:14:23.408 Number of Firmware Slots: N/A 00:14:23.408 Firmware Slot 1 Read-Only: N/A 00:14:23.408 Firmware Activation Without Reset: N/A 00:14:23.408 Multiple Update Detection Support: N/A 00:14:23.408 Firmware Update Granularity: No 
Information Provided 00:14:23.408 Per-Namespace SMART Log: Yes 00:14:23.408 Asymmetric Namespace Access Log Page: Not Supported 00:14:23.408 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:14:23.408 Command Effects Log Page: Supported 00:14:23.408 Get Log Page Extended Data: Supported 00:14:23.408 Telemetry Log Pages: Not Supported 00:14:23.408 Persistent Event Log Pages: Not Supported 00:14:23.408 Supported Log Pages Log Page: May Support 00:14:23.408 Commands Supported & Effects Log Page: Not Supported 00:14:23.408 Feature Identifiers & Effects Log Page:May Support 00:14:23.408 NVMe-MI Commands & Effects Log Page: May Support 00:14:23.408 Data Area 4 for Telemetry Log: Not Supported 00:14:23.408 Error Log Page Entries Supported: 1 00:14:23.408 Keep Alive: Not Supported 00:14:23.408 00:14:23.408 NVM Command Set Attributes 00:14:23.408 ========================== 00:14:23.408 Submission Queue Entry Size 00:14:23.408 Max: 64 00:14:23.408 Min: 64 00:14:23.408 Completion Queue Entry Size 00:14:23.408 Max: 16 00:14:23.408 Min: 16 00:14:23.408 Number of Namespaces: 256 00:14:23.408 Compare Command: Supported 00:14:23.408 Write Uncorrectable Command: Not Supported 00:14:23.408 Dataset Management Command: Supported 00:14:23.408 Write Zeroes Command: Supported 00:14:23.408 Set Features Save Field: Supported 00:14:23.408 Reservations: Not Supported 00:14:23.408 Timestamp: Supported 00:14:23.408 Copy: Supported 00:14:23.409 Volatile Write Cache: Present 00:14:23.409 Atomic Write Unit (Normal): 1 00:14:23.409 Atomic Write Unit (PFail): 1 00:14:23.409 Atomic Compare & Write Unit: 1 00:14:23.409 Fused Compare & Write: Not Supported 00:14:23.409 Scatter-Gather List 00:14:23.409 SGL Command Set: Supported 00:14:23.409 SGL Keyed: Not Supported 00:14:23.409 SGL Bit Bucket Descriptor: Not Supported 00:14:23.409 SGL Metadata Pointer: Not Supported 00:14:23.409 Oversized SGL: Not Supported 00:14:23.409 SGL Metadata Address: Not Supported 00:14:23.409 SGL Offset: Not Supported 00:14:23.409 Transport SGL Data Block: Not Supported 00:14:23.409 Replay Protected Memory Block: Not Supported 00:14:23.409 00:14:23.409 Firmware Slot Information 00:14:23.409 ========================= 00:14:23.409 Active slot: 1 00:14:23.409 Slot 1 Firmware Revision: 1.0 00:14:23.409 00:14:23.409 00:14:23.409 Commands Supported and Effects 00:14:23.409 ============================== 00:14:23.409 Admin Commands 00:14:23.409 -------------- 00:14:23.409 Delete I/O Submission Queue (00h): Supported 00:14:23.409 Create I/O Submission Queue (01h): Supported 00:14:23.409 Get Log Page (02h): Supported 00:14:23.409 Delete I/O Completion Queue (04h): Supported 00:14:23.409 Create I/O Completion Queue (05h): Supported 00:14:23.409 Identify (06h): Supported 00:14:23.409 Abort (08h): Supported 00:14:23.409 Set Features (09h): Supported 00:14:23.409 Get Features (0Ah): Supported 00:14:23.409 Asynchronous Event Request (0Ch): Supported 00:14:23.409 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:23.409 Directive Send (19h): Supported 00:14:23.409 Directive Receive (1Ah): Supported 00:14:23.409 Virtualization Management (1Ch): Supported 00:14:23.409 Doorbell Buffer Config (7Ch): Supported 00:14:23.409 Format NVM (80h): Supported LBA-Change 00:14:23.409 I/O Commands 00:14:23.409 ------------ 00:14:23.409 Flush (00h): Supported LBA-Change 00:14:23.409 Write (01h): Supported LBA-Change 00:14:23.409 Read (02h): Supported 00:14:23.409 Compare (05h): Supported 00:14:23.409 Write Zeroes (08h): Supported LBA-Change 00:14:23.409 Dataset Management 
(09h): Supported LBA-Change 00:14:23.409 Unknown (0Ch): Supported 00:14:23.409 Unknown (12h): Supported 00:14:23.409 Copy (19h): Supported LBA-Change 00:14:23.409 Unknown (1Dh): Supported LBA-Change 00:14:23.409 00:14:23.409 Error Log 00:14:23.409 ========= 00:14:23.409 00:14:23.409 Arbitration 00:14:23.409 =========== 00:14:23.409 Arbitration Burst: no limit 00:14:23.409 00:14:23.409 Power Management 00:14:23.409 ================ 00:14:23.409 Number of Power States: 1 00:14:23.409 Current Power State: Power State #0 00:14:23.409 Power State #0: 00:14:23.409 Max Power: 25.00 W 00:14:23.409 Non-Operational State: Operational 00:14:23.409 Entry Latency: 16 microseconds 00:14:23.409 Exit Latency: 4 microseconds 00:14:23.409 Relative Read Throughput: 0 00:14:23.409 Relative Read Latency: 0 00:14:23.409 Relative Write Throughput: 0 00:14:23.409 Relative Write Latency: 0 00:14:23.409 Idle Power: Not Reported 00:14:23.409 Active Power: Not Reported 00:14:23.409 Non-Operational Permissive Mode: Not Supported 00:14:23.409 00:14:23.409 Health Information 00:14:23.409 ================== 00:14:23.409 Critical Warnings: 00:14:23.409 Available Spare Space: OK 00:14:23.409 Temperature: [2024-11-20 13:33:35.340858] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64346 terminated unexpected 00:14:23.409 OK 00:14:23.409 Device Reliability: OK 00:14:23.409 Read Only: No 00:14:23.409 Volatile Memory Backup: OK 00:14:23.409 Current Temperature: 323 Kelvin (50 Celsius) 00:14:23.409 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:23.409 Available Spare: 0% 00:14:23.409 Available Spare Threshold: 0% 00:14:23.409 Life Percentage Used: 0% 00:14:23.409 Data Units Read: 1137 00:14:23.409 Data Units Written: 1004 00:14:23.409 Host Read Commands: 51374 00:14:23.409 Host Write Commands: 50155 00:14:23.409 Controller Busy Time: 0 minutes 00:14:23.409 Power Cycles: 0 00:14:23.409 Power On Hours: 0 hours 00:14:23.409 Unsafe Shutdowns: 0 00:14:23.409 Unrecoverable Media Errors: 0 00:14:23.409 Lifetime Error Log Entries: 0 00:14:23.409 Warning Temperature Time: 0 minutes 00:14:23.409 Critical Temperature Time: 0 minutes 00:14:23.409 00:14:23.409 Number of Queues 00:14:23.409 ================ 00:14:23.409 Number of I/O Submission Queues: 64 00:14:23.409 Number of I/O Completion Queues: 64 00:14:23.409 00:14:23.409 ZNS Specific Controller Data 00:14:23.409 ============================ 00:14:23.409 Zone Append Size Limit: 0 00:14:23.409 00:14:23.409 00:14:23.409 Active Namespaces 00:14:23.409 ================= 00:14:23.409 Namespace ID:1 00:14:23.409 Error Recovery Timeout: Unlimited 00:14:23.409 Command Set Identifier: NVM (00h) 00:14:23.409 Deallocate: Supported 00:14:23.409 Deallocated/Unwritten Error: Supported 00:14:23.409 Deallocated Read Value: All 0x00 00:14:23.409 Deallocate in Write Zeroes: Not Supported 00:14:23.409 Deallocated Guard Field: 0xFFFF 00:14:23.409 Flush: Supported 00:14:23.409 Reservation: Not Supported 00:14:23.409 Namespace Sharing Capabilities: Private 00:14:23.409 Size (in LBAs): 1310720 (5GiB) 00:14:23.409 Capacity (in LBAs): 1310720 (5GiB) 00:14:23.409 Utilization (in LBAs): 1310720 (5GiB) 00:14:23.409 Thin Provisioning: Not Supported 00:14:23.409 Per-NS Atomic Units: No 00:14:23.409 Maximum Single Source Range Length: 128 00:14:23.409 Maximum Copy Length: 128 00:14:23.409 Maximum Source Range Count: 128 00:14:23.409 NGUID/EUI64 Never Reused: No 00:14:23.409 Namespace Write Protected: No 00:14:23.409 Number of LBA Formats: 8 00:14:23.409 Current LBA 
Format: LBA Format #04 00:14:23.409 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:23.409 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:23.409 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:23.409 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:23.409 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:23.409 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:23.409 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:23.409 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:23.409 00:14:23.409 NVM Specific Namespace Data 00:14:23.409 =========================== 00:14:23.409 Logical Block Storage Tag Mask: 0 00:14:23.409 Protection Information Capabilities: 00:14:23.409 16b Guard Protection Information Storage Tag Support: No 00:14:23.409 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:23.409 Storage Tag Check Read Support: No 00:14:23.409 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.409 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.409 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.409 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.409 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.409 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.409 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.409 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.409 ===================================================== 00:14:23.409 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:23.409 ===================================================== 00:14:23.409 Controller Capabilities/Features 00:14:23.409 ================================ 00:14:23.409 Vendor ID: 1b36 00:14:23.409 Subsystem Vendor ID: 1af4 00:14:23.409 Serial Number: 12343 00:14:23.409 Model Number: QEMU NVMe Ctrl 00:14:23.409 Firmware Version: 8.0.0 00:14:23.409 Recommended Arb Burst: 6 00:14:23.409 IEEE OUI Identifier: 00 54 52 00:14:23.409 Multi-path I/O 00:14:23.409 May have multiple subsystem ports: No 00:14:23.409 May have multiple controllers: Yes 00:14:23.409 Associated with SR-IOV VF: No 00:14:23.409 Max Data Transfer Size: 524288 00:14:23.409 Max Number of Namespaces: 256 00:14:23.409 Max Number of I/O Queues: 64 00:14:23.409 NVMe Specification Version (VS): 1.4 00:14:23.409 NVMe Specification Version (Identify): 1.4 00:14:23.409 Maximum Queue Entries: 2048 00:14:23.409 Contiguous Queues Required: Yes 00:14:23.410 Arbitration Mechanisms Supported 00:14:23.410 Weighted Round Robin: Not Supported 00:14:23.410 Vendor Specific: Not Supported 00:14:23.410 Reset Timeout: 7500 ms 00:14:23.410 Doorbell Stride: 4 bytes 00:14:23.410 NVM Subsystem Reset: Not Supported 00:14:23.410 Command Sets Supported 00:14:23.410 NVM Command Set: Supported 00:14:23.410 Boot Partition: Not Supported 00:14:23.410 Memory Page Size Minimum: 4096 bytes 00:14:23.410 Memory Page Size Maximum: 65536 bytes 00:14:23.410 Persistent Memory Region: Not Supported 00:14:23.410 Optional Asynchronous Events Supported 00:14:23.410 Namespace Attribute Notices: Supported 00:14:23.410 Firmware Activation Notices: Not Supported 00:14:23.410 ANA Change Notices: Not Supported 00:14:23.410 PLE Aggregate 
Log Change Notices: Not Supported 00:14:23.410 LBA Status Info Alert Notices: Not Supported 00:14:23.410 EGE Aggregate Log Change Notices: Not Supported 00:14:23.410 Normal NVM Subsystem Shutdown event: Not Supported 00:14:23.410 Zone Descriptor Change Notices: Not Supported 00:14:23.410 Discovery Log Change Notices: Not Supported 00:14:23.410 Controller Attributes 00:14:23.410 128-bit Host Identifier: Not Supported 00:14:23.410 Non-Operational Permissive Mode: Not Supported 00:14:23.410 NVM Sets: Not Supported 00:14:23.410 Read Recovery Levels: Not Supported 00:14:23.410 Endurance Groups: Supported 00:14:23.410 Predictable Latency Mode: Not Supported 00:14:23.410 Traffic Based Keep ALive: Not Supported 00:14:23.410 Namespace Granularity: Not Supported 00:14:23.410 SQ Associations: Not Supported 00:14:23.410 UUID List: Not Supported 00:14:23.410 Multi-Domain Subsystem: Not Supported 00:14:23.410 Fixed Capacity Management: Not Supported 00:14:23.410 Variable Capacity Management: Not Supported 00:14:23.410 Delete Endurance Group: Not Supported 00:14:23.410 Delete NVM Set: Not Supported 00:14:23.410 Extended LBA Formats Supported: Supported 00:14:23.410 Flexible Data Placement Supported: Supported 00:14:23.410 00:14:23.410 Controller Memory Buffer Support 00:14:23.410 ================================ 00:14:23.410 Supported: No 00:14:23.410 00:14:23.410 Persistent Memory Region Support 00:14:23.410 ================================ 00:14:23.410 Supported: No 00:14:23.410 00:14:23.410 Admin Command Set Attributes 00:14:23.410 ============================ 00:14:23.410 Security Send/Receive: Not Supported 00:14:23.410 Format NVM: Supported 00:14:23.410 Firmware Activate/Download: Not Supported 00:14:23.410 Namespace Management: Supported 00:14:23.410 Device Self-Test: Not Supported 00:14:23.410 Directives: Supported 00:14:23.410 NVMe-MI: Not Supported 00:14:23.410 Virtualization Management: Not Supported 00:14:23.410 Doorbell Buffer Config: Supported 00:14:23.410 Get LBA Status Capability: Not Supported 00:14:23.410 Command & Feature Lockdown Capability: Not Supported 00:14:23.410 Abort Command Limit: 4 00:14:23.410 Async Event Request Limit: 4 00:14:23.410 Number of Firmware Slots: N/A 00:14:23.410 Firmware Slot 1 Read-Only: N/A 00:14:23.410 Firmware Activation Without Reset: N/A 00:14:23.410 Multiple Update Detection Support: N/A 00:14:23.410 Firmware Update Granularity: No Information Provided 00:14:23.410 Per-Namespace SMART Log: Yes 00:14:23.410 Asymmetric Namespace Access Log Page: Not Supported 00:14:23.410 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:14:23.410 Command Effects Log Page: Supported 00:14:23.410 Get Log Page Extended Data: Supported 00:14:23.410 Telemetry Log Pages: Not Supported 00:14:23.410 Persistent Event Log Pages: Not Supported 00:14:23.410 Supported Log Pages Log Page: May Support 00:14:23.410 Commands Supported & Effects Log Page: Not Supported 00:14:23.410 Feature Identifiers & Effects Log Page:May Support 00:14:23.410 NVMe-MI Commands & Effects Log Page: May Support 00:14:23.410 Data Area 4 for Telemetry Log: Not Supported 00:14:23.410 Error Log Page Entries Supported: 1 00:14:23.410 Keep Alive: Not Supported 00:14:23.410 00:14:23.410 NVM Command Set Attributes 00:14:23.410 ========================== 00:14:23.410 Submission Queue Entry Size 00:14:23.410 Max: 64 00:14:23.410 Min: 64 00:14:23.410 Completion Queue Entry Size 00:14:23.410 Max: 16 00:14:23.410 Min: 16 00:14:23.410 Number of Namespaces: 256 00:14:23.410 Compare Command: Supported 00:14:23.410 Write 
Uncorrectable Command: Not Supported 00:14:23.410 Dataset Management Command: Supported 00:14:23.410 Write Zeroes Command: Supported 00:14:23.410 Set Features Save Field: Supported 00:14:23.410 Reservations: Not Supported 00:14:23.410 Timestamp: Supported 00:14:23.410 Copy: Supported 00:14:23.410 Volatile Write Cache: Present 00:14:23.410 Atomic Write Unit (Normal): 1 00:14:23.410 Atomic Write Unit (PFail): 1 00:14:23.410 Atomic Compare & Write Unit: 1 00:14:23.410 Fused Compare & Write: Not Supported 00:14:23.410 Scatter-Gather List 00:14:23.410 SGL Command Set: Supported 00:14:23.410 SGL Keyed: Not Supported 00:14:23.410 SGL Bit Bucket Descriptor: Not Supported 00:14:23.410 SGL Metadata Pointer: Not Supported 00:14:23.410 Oversized SGL: Not Supported 00:14:23.410 SGL Metadata Address: Not Supported 00:14:23.410 SGL Offset: Not Supported 00:14:23.410 Transport SGL Data Block: Not Supported 00:14:23.410 Replay Protected Memory Block: Not Supported 00:14:23.410 00:14:23.410 Firmware Slot Information 00:14:23.410 ========================= 00:14:23.410 Active slot: 1 00:14:23.410 Slot 1 Firmware Revision: 1.0 00:14:23.410 00:14:23.410 00:14:23.410 Commands Supported and Effects 00:14:23.410 ============================== 00:14:23.410 Admin Commands 00:14:23.410 -------------- 00:14:23.410 Delete I/O Submission Queue (00h): Supported 00:14:23.410 Create I/O Submission Queue (01h): Supported 00:14:23.410 Get Log Page (02h): Supported 00:14:23.410 Delete I/O Completion Queue (04h): Supported 00:14:23.410 Create I/O Completion Queue (05h): Supported 00:14:23.410 Identify (06h): Supported 00:14:23.410 Abort (08h): Supported 00:14:23.410 Set Features (09h): Supported 00:14:23.410 Get Features (0Ah): Supported 00:14:23.410 Asynchronous Event Request (0Ch): Supported 00:14:23.411 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:23.411 Directive Send (19h): Supported 00:14:23.411 Directive Receive (1Ah): Supported 00:14:23.411 Virtualization Management (1Ch): Supported 00:14:23.411 Doorbell Buffer Config (7Ch): Supported 00:14:23.411 Format NVM (80h): Supported LBA-Change 00:14:23.411 I/O Commands 00:14:23.411 ------------ 00:14:23.411 Flush (00h): Supported LBA-Change 00:14:23.411 Write (01h): Supported LBA-Change 00:14:23.411 Read (02h): Supported 00:14:23.411 Compare (05h): Supported 00:14:23.411 Write Zeroes (08h): Supported LBA-Change 00:14:23.411 Dataset Management (09h): Supported LBA-Change 00:14:23.411 Unknown (0Ch): Supported 00:14:23.411 Unknown (12h): Supported 00:14:23.411 Copy (19h): Supported LBA-Change 00:14:23.411 Unknown (1Dh): Supported LBA-Change 00:14:23.411 00:14:23.411 Error Log 00:14:23.411 ========= 00:14:23.411 00:14:23.411 Arbitration 00:14:23.411 =========== 00:14:23.411 Arbitration Burst: no limit 00:14:23.411 00:14:23.411 Power Management 00:14:23.411 ================ 00:14:23.411 Number of Power States: 1 00:14:23.411 Current Power State: Power State #0 00:14:23.411 Power State #0: 00:14:23.411 Max Power: 25.00 W 00:14:23.411 Non-Operational State: Operational 00:14:23.411 Entry Latency: 16 microseconds 00:14:23.411 Exit Latency: 4 microseconds 00:14:23.411 Relative Read Throughput: 0 00:14:23.411 Relative Read Latency: 0 00:14:23.411 Relative Write Throughput: 0 00:14:23.411 Relative Write Latency: 0 00:14:23.411 Idle Power: Not Reported 00:14:23.411 Active Power: Not Reported 00:14:23.411 Non-Operational Permissive Mode: Not Supported 00:14:23.411 00:14:23.411 Health Information 00:14:23.411 ================== 00:14:23.411 Critical Warnings: 00:14:23.411 
Available Spare Space: OK 00:14:23.411 Temperature: OK 00:14:23.411 Device Reliability: OK 00:14:23.411 Read Only: No 00:14:23.411 Volatile Memory Backup: OK 00:14:23.411 Current Temperature: 323 Kelvin (50 Celsius) 00:14:23.411 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:23.411 Available Spare: 0% 00:14:23.411 Available Spare Threshold: 0% 00:14:23.411 Life Percentage Used: 0% 00:14:23.411 Data Units Read: 896 00:14:23.411 Data Units Written: 825 00:14:23.411 Host Read Commands: 35276 00:14:23.411 Host Write Commands: 34699 00:14:23.411 Controller Busy Time: 0 minutes 00:14:23.411 Power Cycles: 0 00:14:23.411 Power On Hours: 0 hours 00:14:23.411 Unsafe Shutdowns: 0 00:14:23.411 Unrecoverable Media Errors: 0 00:14:23.411 Lifetime Error Log Entries: 0 00:14:23.411 Warning Temperature Time: 0 minutes 00:14:23.411 Critical Temperature Time: 0 minutes 00:14:23.411 00:14:23.411 Number of Queues 00:14:23.411 ================ 00:14:23.411 Number of I/O Submission Queues: 64 00:14:23.411 Number of I/O Completion Queues: 64 00:14:23.411 00:14:23.411 ZNS Specific Controller Data 00:14:23.411 ============================ 00:14:23.411 Zone Append Size Limit: 0 00:14:23.411 00:14:23.411 00:14:23.411 Active Namespaces 00:14:23.411 ================= 00:14:23.411 Namespace ID:1 00:14:23.411 Error Recovery Timeout: Unlimited 00:14:23.411 Command Set Identifier: NVM (00h) 00:14:23.411 Deallocate: Supported 00:14:23.411 Deallocated/Unwritten Error: Supported 00:14:23.411 Deallocated Read Value: All 0x00 00:14:23.411 Deallocate in Write Zeroes: Not Supported 00:14:23.411 Deallocated Guard Field: 0xFFFF 00:14:23.411 Flush: Supported 00:14:23.411 Reservation: Not Supported 00:14:23.411 Namespace Sharing Capabilities: Multiple Controllers 00:14:23.411 Size (in LBAs): 262144 (1GiB) 00:14:23.411 Capacity (in LBAs): 262144 (1GiB) 00:14:23.411 Utilization (in LBAs): 262144 (1GiB) 00:14:23.411 Thin Provisioning: Not Supported 00:14:23.411 Per-NS Atomic Units: No 00:14:23.411 Maximum Single Source Range Length: 128 00:14:23.411 Maximum Copy Length: 128 00:14:23.411 Maximum Source Range Count: 128 00:14:23.411 NGUID/EUI64 Never Reused: No 00:14:23.411 Namespace Write Protected: No 00:14:23.411 Endurance group ID: 1 00:14:23.411 Number of LBA Formats: 8 00:14:23.411 Current LBA Format: LBA Format #04 00:14:23.411 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:23.411 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:23.411 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:23.411 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:23.411 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:23.411 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:23.411 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:23.411 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:23.411 00:14:23.411 Get Feature FDP: 00:14:23.411 ================ 00:14:23.411 Enabled: Yes 00:14:23.411 FDP configuration index: 0 00:14:23.411 00:14:23.411 FDP configurations log page 00:14:23.411 =========================== 00:14:23.411 Number of FDP configurations: 1 00:14:23.411 Version: 0 00:14:23.411 Size: 112 00:14:23.411 FDP Configuration Descriptor: 0 00:14:23.411 Descriptor Size: 96 00:14:23.411 Reclaim Group Identifier format: 2 00:14:23.411 FDP Volatile Write Cache: Not Present 00:14:23.411 FDP Configuration: Valid 00:14:23.411 Vendor Specific Size: 0 00:14:23.411 Number of Reclaim Groups: 2 00:14:23.411 Number of Reclaim Unit Handles: 8 00:14:23.411 Max Placement Identifiers: 128 00:14:23.411 Number of
Namespaces Supported: 256 00:14:23.411 Reclaim unit Nominal Size: 6000000 bytes 00:14:23.411 Estimated Reclaim Unit Time Limit: Not Reported 00:14:23.411 RUH Desc #000: RUH Type: Initially Isolated 00:14:23.411 RUH Desc #001: RUH Type: Initially Isolated 00:14:23.411 RUH Desc #002: RUH Type: Initially Isolated 00:14:23.411 RUH Desc #003: RUH Type: Initially Isolated 00:14:23.411 RUH Desc #004: RUH Type: Initially Isolated 00:14:23.411 RUH Desc #005: RUH Type: Initially Isolated 00:14:23.411 RUH Desc #006: RUH Type: Initially Isolated 00:14:23.411 RUH Desc #007: RUH Type: Initially Isolated 00:14:23.411 00:14:23.411 FDP reclaim unit handle usage log page 00:14:23.411 ====================================== 00:14:23.411 Number of Reclaim Unit Handles: 8 00:14:23.411 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:23.411 RUH Usage Desc #001: RUH Attributes: Unused 00:14:23.411 RUH Usage Desc #002: RUH Attributes: Unused 00:14:23.411 RUH Usage Desc #003: RUH Attributes: Unused 00:14:23.411 RUH Usage Desc #004: RUH Attributes: Unused 00:14:23.411 RUH Usage Desc #005: RUH Attributes: Unused 00:14:23.411 RUH Usage Desc #006: RUH Attributes: Unused 00:14:23.411 RUH Usage Desc #007: RUH Attributes: Unused 00:14:23.411 00:14:23.411 FDP statistics log page 00:14:23.411 ======================= 00:14:23.411 Host bytes with metadata written: 530620416 00:14:23.411 Med[2024-11-20 13:33:35.342596] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64346 terminated unexpected 00:14:23.411 ia bytes with metadata written: 530677760 00:14:23.411 Media bytes erased: 0 00:14:23.411 00:14:23.411 FDP events log page 00:14:23.411 =================== 00:14:23.411 Number of FDP events: 0 00:14:23.411 00:14:23.411 NVM Specific Namespace Data 00:14:23.411 =========================== 00:14:23.411 Logical Block Storage Tag Mask: 0 00:14:23.411 Protection Information Capabilities: 00:14:23.411 16b Guard Protection Information Storage Tag Support: No 00:14:23.411 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:23.411 Storage Tag Check Read Support: No 00:14:23.411 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.411 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.411 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.411 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.411 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.411 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.411 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.411 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.411 ===================================================== 00:14:23.411 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:23.411 ===================================================== 00:14:23.411 Controller Capabilities/Features 00:14:23.411 ================================ 00:14:23.411 Vendor ID: 1b36 00:14:23.411 Subsystem Vendor ID: 1af4 00:14:23.411 Serial Number: 12342 00:14:23.411 Model Number: QEMU NVMe Ctrl 00:14:23.411 Firmware Version: 8.0.0 00:14:23.411 Recommended Arb Burst: 6 00:14:23.411 IEEE OUI Identifier: 00 54 52 00:14:23.411 Multi-path I/O
00:14:23.411 May have multiple subsystem ports: No 00:14:23.412 May have multiple controllers: No 00:14:23.412 Associated with SR-IOV VF: No 00:14:23.412 Max Data Transfer Size: 524288 00:14:23.412 Max Number of Namespaces: 256 00:14:23.412 Max Number of I/O Queues: 64 00:14:23.412 NVMe Specification Version (VS): 1.4 00:14:23.412 NVMe Specification Version (Identify): 1.4 00:14:23.412 Maximum Queue Entries: 2048 00:14:23.412 Contiguous Queues Required: Yes 00:14:23.412 Arbitration Mechanisms Supported 00:14:23.412 Weighted Round Robin: Not Supported 00:14:23.412 Vendor Specific: Not Supported 00:14:23.412 Reset Timeout: 7500 ms 00:14:23.412 Doorbell Stride: 4 bytes 00:14:23.412 NVM Subsystem Reset: Not Supported 00:14:23.412 Command Sets Supported 00:14:23.412 NVM Command Set: Supported 00:14:23.412 Boot Partition: Not Supported 00:14:23.412 Memory Page Size Minimum: 4096 bytes 00:14:23.412 Memory Page Size Maximum: 65536 bytes 00:14:23.412 Persistent Memory Region: Not Supported 00:14:23.412 Optional Asynchronous Events Supported 00:14:23.412 Namespace Attribute Notices: Supported 00:14:23.412 Firmware Activation Notices: Not Supported 00:14:23.412 ANA Change Notices: Not Supported 00:14:23.412 PLE Aggregate Log Change Notices: Not Supported 00:14:23.412 LBA Status Info Alert Notices: Not Supported 00:14:23.412 EGE Aggregate Log Change Notices: Not Supported 00:14:23.412 Normal NVM Subsystem Shutdown event: Not Supported 00:14:23.412 Zone Descriptor Change Notices: Not Supported 00:14:23.412 Discovery Log Change Notices: Not Supported 00:14:23.412 Controller Attributes 00:14:23.412 128-bit Host Identifier: Not Supported 00:14:23.412 Non-Operational Permissive Mode: Not Supported 00:14:23.412 NVM Sets: Not Supported 00:14:23.412 Read Recovery Levels: Not Supported 00:14:23.412 Endurance Groups: Not Supported 00:14:23.412 Predictable Latency Mode: Not Supported 00:14:23.412 Traffic Based Keep ALive: Not Supported 00:14:23.412 Namespace Granularity: Not Supported 00:14:23.412 SQ Associations: Not Supported 00:14:23.412 UUID List: Not Supported 00:14:23.412 Multi-Domain Subsystem: Not Supported 00:14:23.412 Fixed Capacity Management: Not Supported 00:14:23.412 Variable Capacity Management: Not Supported 00:14:23.412 Delete Endurance Group: Not Supported 00:14:23.412 Delete NVM Set: Not Supported 00:14:23.412 Extended LBA Formats Supported: Supported 00:14:23.412 Flexible Data Placement Supported: Not Supported 00:14:23.412 00:14:23.412 Controller Memory Buffer Support 00:14:23.412 ================================ 00:14:23.412 Supported: No 00:14:23.412 00:14:23.412 Persistent Memory Region Support 00:14:23.412 ================================ 00:14:23.412 Supported: No 00:14:23.412 00:14:23.412 Admin Command Set Attributes 00:14:23.412 ============================ 00:14:23.412 Security Send/Receive: Not Supported 00:14:23.412 Format NVM: Supported 00:14:23.412 Firmware Activate/Download: Not Supported 00:14:23.412 Namespace Management: Supported 00:14:23.412 Device Self-Test: Not Supported 00:14:23.412 Directives: Supported 00:14:23.412 NVMe-MI: Not Supported 00:14:23.412 Virtualization Management: Not Supported 00:14:23.412 Doorbell Buffer Config: Supported 00:14:23.412 Get LBA Status Capability: Not Supported 00:14:23.412 Command & Feature Lockdown Capability: Not Supported 00:14:23.412 Abort Command Limit: 4 00:14:23.412 Async Event Request Limit: 4 00:14:23.412 Number of Firmware Slots: N/A 00:14:23.412 Firmware Slot 1 Read-Only: N/A 00:14:23.412 Firmware Activation Without Reset: N/A 
00:14:23.412 Multiple Update Detection Support: N/A 00:14:23.412 Firmware Update Granularity: No Information Provided 00:14:23.412 Per-Namespace SMART Log: Yes 00:14:23.412 Asymmetric Namespace Access Log Page: Not Supported 00:14:23.412 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:14:23.412 Command Effects Log Page: Supported 00:14:23.412 Get Log Page Extended Data: Supported 00:14:23.412 Telemetry Log Pages: Not Supported 00:14:23.412 Persistent Event Log Pages: Not Supported 00:14:23.412 Supported Log Pages Log Page: May Support 00:14:23.412 Commands Supported & Effects Log Page: Not Supported 00:14:23.412 Feature Identifiers & Effects Log Page:May Support 00:14:23.412 NVMe-MI Commands & Effects Log Page: May Support 00:14:23.412 Data Area 4 for Telemetry Log: Not Supported 00:14:23.412 Error Log Page Entries Supported: 1 00:14:23.412 Keep Alive: Not Supported 00:14:23.412 00:14:23.412 NVM Command Set Attributes 00:14:23.412 ========================== 00:14:23.412 Submission Queue Entry Size 00:14:23.412 Max: 64 00:14:23.412 Min: 64 00:14:23.412 Completion Queue Entry Size 00:14:23.412 Max: 16 00:14:23.412 Min: 16 00:14:23.412 Number of Namespaces: 256 00:14:23.412 Compare Command: Supported 00:14:23.412 Write Uncorrectable Command: Not Supported 00:14:23.412 Dataset Management Command: Supported 00:14:23.412 Write Zeroes Command: Supported 00:14:23.412 Set Features Save Field: Supported 00:14:23.412 Reservations: Not Supported 00:14:23.412 Timestamp: Supported 00:14:23.412 Copy: Supported 00:14:23.412 Volatile Write Cache: Present 00:14:23.412 Atomic Write Unit (Normal): 1 00:14:23.412 Atomic Write Unit (PFail): 1 00:14:23.412 Atomic Compare & Write Unit: 1 00:14:23.412 Fused Compare & Write: Not Supported 00:14:23.412 Scatter-Gather List 00:14:23.412 SGL Command Set: Supported 00:14:23.412 SGL Keyed: Not Supported 00:14:23.412 SGL Bit Bucket Descriptor: Not Supported 00:14:23.412 SGL Metadata Pointer: Not Supported 00:14:23.412 Oversized SGL: Not Supported 00:14:23.412 SGL Metadata Address: Not Supported 00:14:23.412 SGL Offset: Not Supported 00:14:23.413 Transport SGL Data Block: Not Supported 00:14:23.413 Replay Protected Memory Block: Not Supported 00:14:23.413 00:14:23.413 Firmware Slot Information 00:14:23.413 ========================= 00:14:23.413 Active slot: 1 00:14:23.413 Slot 1 Firmware Revision: 1.0 00:14:23.413 00:14:23.413 00:14:23.413 Commands Supported and Effects 00:14:23.413 ============================== 00:14:23.413 Admin Commands 00:14:23.413 -------------- 00:14:23.413 Delete I/O Submission Queue (00h): Supported 00:14:23.413 Create I/O Submission Queue (01h): Supported 00:14:23.413 Get Log Page (02h): Supported 00:14:23.413 Delete I/O Completion Queue (04h): Supported 00:14:23.413 Create I/O Completion Queue (05h): Supported 00:14:23.413 Identify (06h): Supported 00:14:23.413 Abort (08h): Supported 00:14:23.413 Set Features (09h): Supported 00:14:23.413 Get Features (0Ah): Supported 00:14:23.413 Asynchronous Event Request (0Ch): Supported 00:14:23.413 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:23.413 Directive Send (19h): Supported 00:14:23.413 Directive Receive (1Ah): Supported 00:14:23.413 Virtualization Management (1Ch): Supported 00:14:23.413 Doorbell Buffer Config (7Ch): Supported 00:14:23.413 Format NVM (80h): Supported LBA-Change 00:14:23.413 I/O Commands 00:14:23.413 ------------ 00:14:23.413 Flush (00h): Supported LBA-Change 00:14:23.413 Write (01h): Supported LBA-Change 00:14:23.413 Read (02h): Supported 00:14:23.413 Compare (05h): 
Supported 00:14:23.413 Write Zeroes (08h): Supported LBA-Change 00:14:23.413 Dataset Management (09h): Supported LBA-Change 00:14:23.413 Unknown (0Ch): Supported 00:14:23.413 Unknown (12h): Supported 00:14:23.413 Copy (19h): Supported LBA-Change 00:14:23.413 Unknown (1Dh): Supported LBA-Change 00:14:23.413 00:14:23.413 Error Log 00:14:23.413 ========= 00:14:23.413 00:14:23.413 Arbitration 00:14:23.413 =========== 00:14:23.413 Arbitration Burst: no limit 00:14:23.413 00:14:23.413 Power Management 00:14:23.413 ================ 00:14:23.413 Number of Power States: 1 00:14:23.413 Current Power State: Power State #0 00:14:23.413 Power State #0: 00:14:23.413 Max Power: 25.00 W 00:14:23.413 Non-Operational State: Operational 00:14:23.413 Entry Latency: 16 microseconds 00:14:23.413 Exit Latency: 4 microseconds 00:14:23.413 Relative Read Throughput: 0 00:14:23.413 Relative Read Latency: 0 00:14:23.413 Relative Write Throughput: 0 00:14:23.413 Relative Write Latency: 0 00:14:23.413 Idle Power: Not Reported 00:14:23.413 Active Power: Not Reported 00:14:23.413 Non-Operational Permissive Mode: Not Supported 00:14:23.413 00:14:23.413 Health Information 00:14:23.413 ================== 00:14:23.413 Critical Warnings: 00:14:23.413 Available Spare Space: OK 00:14:23.413 Temperature: OK 00:14:23.413 Device Reliability: OK 00:14:23.413 Read Only: No 00:14:23.413 Volatile Memory Backup: OK 00:14:23.413 Current Temperature: 323 Kelvin (50 Celsius) 00:14:23.413 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:23.413 Available Spare: 0% 00:14:23.413 Available Spare Threshold: 0% 00:14:23.413 Life Percentage Used: 0% 00:14:23.413 Data Units Read: 2440 00:14:23.413 Data Units Written: 2228 00:14:23.413 Host Read Commands: 103725 00:14:23.413 Host Write Commands: 101994 00:14:23.413 Controller Busy Time: 0 minutes 00:14:23.413 Power Cycles: 0 00:14:23.413 Power On Hours: 0 hours 00:14:23.413 Unsafe Shutdowns: 0 00:14:23.413 Unrecoverable Media Errors: 0 00:14:23.413 Lifetime Error Log Entries: 0 00:14:23.413 Warning Temperature Time: 0 minutes 00:14:23.413 Critical Temperature Time: 0 minutes 00:14:23.413 00:14:23.413 Number of Queues 00:14:23.413 ================ 00:14:23.413 Number of I/O Submission Queues: 64 00:14:23.413 Number of I/O Completion Queues: 64 00:14:23.413 00:14:23.413 ZNS Specific Controller Data 00:14:23.413 ============================ 00:14:23.413 Zone Append Size Limit: 0 00:14:23.413 00:14:23.413 00:14:23.413 Active Namespaces 00:14:23.413 ================= 00:14:23.413 Namespace ID:1 00:14:23.413 Error Recovery Timeout: Unlimited 00:14:23.413 Command Set Identifier: NVM (00h) 00:14:23.413 Deallocate: Supported 00:14:23.413 Deallocated/Unwritten Error: Supported 00:14:23.413 Deallocated Read Value: All 0x00 00:14:23.413 Deallocate in Write Zeroes: Not Supported 00:14:23.413 Deallocated Guard Field: 0xFFFF 00:14:23.413 Flush: Supported 00:14:23.413 Reservation: Not Supported 00:14:23.413 Namespace Sharing Capabilities: Private 00:14:23.413 Size (in LBAs): 1048576 (4GiB) 00:14:23.413 Capacity (in LBAs): 1048576 (4GiB) 00:14:23.413 Utilization (in LBAs): 1048576 (4GiB) 00:14:23.413 Thin Provisioning: Not Supported 00:14:23.413 Per-NS Atomic Units: No 00:14:23.413 Maximum Single Source Range Length: 128 00:14:23.413 Maximum Copy Length: 128 00:14:23.413 Maximum Source Range Count: 128 00:14:23.413 NGUID/EUI64 Never Reused: No 00:14:23.413 Namespace Write Protected: No 00:14:23.413 Number of LBA Formats: 8 00:14:23.413 Current LBA Format: LBA Format #04 00:14:23.413 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:14:23.413 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:23.413 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:23.413 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:23.413 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:23.413 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:23.413 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:23.413 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:23.413 00:14:23.413 NVM Specific Namespace Data 00:14:23.413 =========================== 00:14:23.413 Logical Block Storage Tag Mask: 0 00:14:23.413 Protection Information Capabilities: 00:14:23.413 16b Guard Protection Information Storage Tag Support: No 00:14:23.413 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:23.413 Storage Tag Check Read Support: No 00:14:23.413 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.413 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.413 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.413 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.413 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.413 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.413 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.413 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.413 Namespace ID:2 00:14:23.413 Error Recovery Timeout: Unlimited 00:14:23.413 Command Set Identifier: NVM (00h) 00:14:23.413 Deallocate: Supported 00:14:23.413 Deallocated/Unwritten Error: Supported 00:14:23.413 Deallocated Read Value: All 0x00 00:14:23.413 Deallocate in Write Zeroes: Not Supported 00:14:23.413 Deallocated Guard Field: 0xFFFF 00:14:23.413 Flush: Supported 00:14:23.413 Reservation: Not Supported 00:14:23.413 Namespace Sharing Capabilities: Private 00:14:23.414 Size (in LBAs): 1048576 (4GiB) 00:14:23.414 Capacity (in LBAs): 1048576 (4GiB) 00:14:23.414 Utilization (in LBAs): 1048576 (4GiB) 00:14:23.414 Thin Provisioning: Not Supported 00:14:23.414 Per-NS Atomic Units: No 00:14:23.414 Maximum Single Source Range Length: 128 00:14:23.414 Maximum Copy Length: 128 00:14:23.414 Maximum Source Range Count: 128 00:14:23.414 NGUID/EUI64 Never Reused: No 00:14:23.414 Namespace Write Protected: No 00:14:23.414 Number of LBA Formats: 8 00:14:23.414 Current LBA Format: LBA Format #04 00:14:23.414 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:23.414 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:23.414 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:23.414 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:23.414 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:23.414 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:23.414 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:23.414 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:23.414 00:14:23.414 NVM Specific Namespace Data 00:14:23.414 =========================== 00:14:23.414 Logical Block Storage Tag Mask: 0 00:14:23.414 Protection Information Capabilities: 00:14:23.414 16b Guard Protection Information Storage Tag Support: No 00:14:23.414 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
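The "(4GiB)" figures in these namespace blocks follow directly from the LBA count and the current LBA format; a worked check in shell, using the numbers reported above:

# Worked check: 1048576 LBAs at the current LBA format #04 (4096-byte data,
# no interleaved metadata) is exactly the 4GiB reported for these namespaces.
echo "$(( 1048576 * 4096 / (1024 * 1024 * 1024) ))GiB"   # -> 4GiB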
00:14:23.414 Storage Tag Check Read Support: No 00:14:23.414 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.414 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.414 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.414 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.414 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.414 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.414 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.414 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.414 Namespace ID:3 00:14:23.414 Error Recovery Timeout: Unlimited 00:14:23.414 Command Set Identifier: NVM (00h) 00:14:23.414 Deallocate: Supported 00:14:23.414 Deallocated/Unwritten Error: Supported 00:14:23.414 Deallocated Read Value: All 0x00 00:14:23.414 Deallocate in Write Zeroes: Not Supported 00:14:23.414 Deallocated Guard Field: 0xFFFF 00:14:23.414 Flush: Supported 00:14:23.414 Reservation: Not Supported 00:14:23.414 Namespace Sharing Capabilities: Private 00:14:23.414 Size (in LBAs): 1048576 (4GiB) 00:14:23.697 Capacity (in LBAs): 1048576 (4GiB) 00:14:23.697 Utilization (in LBAs): 1048576 (4GiB) 00:14:23.697 Thin Provisioning: Not Supported 00:14:23.697 Per-NS Atomic Units: No 00:14:23.697 Maximum Single Source Range Length: 128 00:14:23.697 Maximum Copy Length: 128 00:14:23.697 Maximum Source Range Count: 128 00:14:23.697 NGUID/EUI64 Never Reused: No 00:14:23.697 Namespace Write Protected: No 00:14:23.697 Number of LBA Formats: 8 00:14:23.697 Current LBA Format: LBA Format #04 00:14:23.697 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:23.697 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:23.697 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:23.697 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:23.697 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:23.697 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:23.697 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:23.697 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:23.697 00:14:23.697 NVM Specific Namespace Data 00:14:23.697 =========================== 00:14:23.697 Logical Block Storage Tag Mask: 0 00:14:23.697 Protection Information Capabilities: 00:14:23.697 16b Guard Protection Information Storage Tag Support: No 00:14:23.697 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:23.697 Storage Tag Check Read Support: No 00:14:23.697 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.697 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.697 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.697 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.697 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.697 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.697 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.697 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.697 13:33:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:23.697 13:33:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:14:23.957 ===================================================== 00:14:23.957 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:23.957 ===================================================== 00:14:23.957 Controller Capabilities/Features 00:14:23.957 ================================ 00:14:23.957 Vendor ID: 1b36 00:14:23.957 Subsystem Vendor ID: 1af4 00:14:23.957 Serial Number: 12340 00:14:23.957 Model Number: QEMU NVMe Ctrl 00:14:23.957 Firmware Version: 8.0.0 00:14:23.957 Recommended Arb Burst: 6 00:14:23.957 IEEE OUI Identifier: 00 54 52 00:14:23.957 Multi-path I/O 00:14:23.957 May have multiple subsystem ports: No 00:14:23.957 May have multiple controllers: No 00:14:23.957 Associated with SR-IOV VF: No 00:14:23.957 Max Data Transfer Size: 524288 00:14:23.957 Max Number of Namespaces: 256 00:14:23.957 Max Number of I/O Queues: 64 00:14:23.957 NVMe Specification Version (VS): 1.4 00:14:23.957 NVMe Specification Version (Identify): 1.4 00:14:23.957 Maximum Queue Entries: 2048 00:14:23.957 Contiguous Queues Required: Yes 00:14:23.957 Arbitration Mechanisms Supported 00:14:23.957 Weighted Round Robin: Not Supported 00:14:23.957 Vendor Specific: Not Supported 00:14:23.957 Reset Timeout: 7500 ms 00:14:23.957 Doorbell Stride: 4 bytes 00:14:23.957 NVM Subsystem Reset: Not Supported 00:14:23.957 Command Sets Supported 00:14:23.957 NVM Command Set: Supported 00:14:23.957 Boot Partition: Not Supported 00:14:23.957 Memory Page Size Minimum: 4096 bytes 00:14:23.957 Memory Page Size Maximum: 65536 bytes 00:14:23.957 Persistent Memory Region: Not Supported 00:14:23.957 Optional Asynchronous Events Supported 00:14:23.957 Namespace Attribute Notices: Supported 00:14:23.957 Firmware Activation Notices: Not Supported 00:14:23.957 ANA Change Notices: Not Supported 00:14:23.957 PLE Aggregate Log Change Notices: Not Supported 00:14:23.957 LBA Status Info Alert Notices: Not Supported 00:14:23.957 EGE Aggregate Log Change Notices: Not Supported 00:14:23.957 Normal NVM Subsystem Shutdown event: Not Supported 00:14:23.957 Zone Descriptor Change Notices: Not Supported 00:14:23.957 Discovery Log Change Notices: Not Supported 00:14:23.957 Controller Attributes 00:14:23.957 128-bit Host Identifier: Not Supported 00:14:23.957 Non-Operational Permissive Mode: Not Supported 00:14:23.957 NVM Sets: Not Supported 00:14:23.957 Read Recovery Levels: Not Supported 00:14:23.957 Endurance Groups: Not Supported 00:14:23.957 Predictable Latency Mode: Not Supported 00:14:23.957 Traffic Based Keep ALive: Not Supported 00:14:23.957 Namespace Granularity: Not Supported 00:14:23.957 SQ Associations: Not Supported 00:14:23.957 UUID List: Not Supported 00:14:23.957 Multi-Domain Subsystem: Not Supported 00:14:23.957 Fixed Capacity Management: Not Supported 00:14:23.957 Variable Capacity Management: Not Supported 00:14:23.957 Delete Endurance Group: Not Supported 00:14:23.957 Delete NVM Set: Not Supported 00:14:23.957 Extended LBA Formats Supported: Supported 00:14:23.957 Flexible Data Placement Supported: Not Supported 00:14:23.957 00:14:23.957 Controller Memory Buffer Support 00:14:23.957 ================================ 00:14:23.957 Supported: No 00:14:23.957 00:14:23.957 Persistent Memory Region Support 00:14:23.957 
================================ 00:14:23.957 Supported: No 00:14:23.957 00:14:23.957 Admin Command Set Attributes 00:14:23.957 ============================ 00:14:23.957 Security Send/Receive: Not Supported 00:14:23.957 Format NVM: Supported 00:14:23.957 Firmware Activate/Download: Not Supported 00:14:23.957 Namespace Management: Supported 00:14:23.957 Device Self-Test: Not Supported 00:14:23.957 Directives: Supported 00:14:23.957 NVMe-MI: Not Supported 00:14:23.957 Virtualization Management: Not Supported 00:14:23.958 Doorbell Buffer Config: Supported 00:14:23.958 Get LBA Status Capability: Not Supported 00:14:23.958 Command & Feature Lockdown Capability: Not Supported 00:14:23.958 Abort Command Limit: 4 00:14:23.958 Async Event Request Limit: 4 00:14:23.958 Number of Firmware Slots: N/A 00:14:23.958 Firmware Slot 1 Read-Only: N/A 00:14:23.958 Firmware Activation Without Reset: N/A 00:14:23.958 Multiple Update Detection Support: N/A 00:14:23.958 Firmware Update Granularity: No Information Provided 00:14:23.958 Per-Namespace SMART Log: Yes 00:14:23.958 Asymmetric Namespace Access Log Page: Not Supported 00:14:23.958 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:14:23.958 Command Effects Log Page: Supported 00:14:23.958 Get Log Page Extended Data: Supported 00:14:23.958 Telemetry Log Pages: Not Supported 00:14:23.958 Persistent Event Log Pages: Not Supported 00:14:23.958 Supported Log Pages Log Page: May Support 00:14:23.958 Commands Supported & Effects Log Page: Not Supported 00:14:23.958 Feature Identifiers & Effects Log Page:May Support 00:14:23.958 NVMe-MI Commands & Effects Log Page: May Support 00:14:23.958 Data Area 4 for Telemetry Log: Not Supported 00:14:23.958 Error Log Page Entries Supported: 1 00:14:23.958 Keep Alive: Not Supported 00:14:23.958 00:14:23.958 NVM Command Set Attributes 00:14:23.958 ========================== 00:14:23.958 Submission Queue Entry Size 00:14:23.958 Max: 64 00:14:23.958 Min: 64 00:14:23.958 Completion Queue Entry Size 00:14:23.958 Max: 16 00:14:23.958 Min: 16 00:14:23.958 Number of Namespaces: 256 00:14:23.958 Compare Command: Supported 00:14:23.958 Write Uncorrectable Command: Not Supported 00:14:23.958 Dataset Management Command: Supported 00:14:23.958 Write Zeroes Command: Supported 00:14:23.958 Set Features Save Field: Supported 00:14:23.958 Reservations: Not Supported 00:14:23.958 Timestamp: Supported 00:14:23.958 Copy: Supported 00:14:23.958 Volatile Write Cache: Present 00:14:23.958 Atomic Write Unit (Normal): 1 00:14:23.958 Atomic Write Unit (PFail): 1 00:14:23.958 Atomic Compare & Write Unit: 1 00:14:23.958 Fused Compare & Write: Not Supported 00:14:23.958 Scatter-Gather List 00:14:23.958 SGL Command Set: Supported 00:14:23.958 SGL Keyed: Not Supported 00:14:23.958 SGL Bit Bucket Descriptor: Not Supported 00:14:23.958 SGL Metadata Pointer: Not Supported 00:14:23.958 Oversized SGL: Not Supported 00:14:23.958 SGL Metadata Address: Not Supported 00:14:23.958 SGL Offset: Not Supported 00:14:23.958 Transport SGL Data Block: Not Supported 00:14:23.958 Replay Protected Memory Block: Not Supported 00:14:23.958 00:14:23.958 Firmware Slot Information 00:14:23.958 ========================= 00:14:23.958 Active slot: 1 00:14:23.958 Slot 1 Firmware Revision: 1.0 00:14:23.958 00:14:23.958 00:14:23.958 Commands Supported and Effects 00:14:23.958 ============================== 00:14:23.958 Admin Commands 00:14:23.958 -------------- 00:14:23.958 Delete I/O Submission Queue (00h): Supported 00:14:23.958 Create I/O Submission Queue (01h): Supported 00:14:23.958 
Get Log Page (02h): Supported 00:14:23.958 Delete I/O Completion Queue (04h): Supported 00:14:23.958 Create I/O Completion Queue (05h): Supported 00:14:23.958 Identify (06h): Supported 00:14:23.958 Abort (08h): Supported 00:14:23.958 Set Features (09h): Supported 00:14:23.958 Get Features (0Ah): Supported 00:14:23.958 Asynchronous Event Request (0Ch): Supported 00:14:23.958 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:23.958 Directive Send (19h): Supported 00:14:23.958 Directive Receive (1Ah): Supported 00:14:23.958 Virtualization Management (1Ch): Supported 00:14:23.958 Doorbell Buffer Config (7Ch): Supported 00:14:23.958 Format NVM (80h): Supported LBA-Change 00:14:23.958 I/O Commands 00:14:23.958 ------------ 00:14:23.958 Flush (00h): Supported LBA-Change 00:14:23.958 Write (01h): Supported LBA-Change 00:14:23.958 Read (02h): Supported 00:14:23.958 Compare (05h): Supported 00:14:23.958 Write Zeroes (08h): Supported LBA-Change 00:14:23.958 Dataset Management (09h): Supported LBA-Change 00:14:23.958 Unknown (0Ch): Supported 00:14:23.958 Unknown (12h): Supported 00:14:23.958 Copy (19h): Supported LBA-Change 00:14:23.958 Unknown (1Dh): Supported LBA-Change 00:14:23.958 00:14:23.958 Error Log 00:14:23.958 ========= 00:14:23.958 00:14:23.958 Arbitration 00:14:23.958 =========== 00:14:23.958 Arbitration Burst: no limit 00:14:23.958 00:14:23.958 Power Management 00:14:23.958 ================ 00:14:23.958 Number of Power States: 1 00:14:23.958 Current Power State: Power State #0 00:14:23.958 Power State #0: 00:14:23.958 Max Power: 25.00 W 00:14:23.958 Non-Operational State: Operational 00:14:23.958 Entry Latency: 16 microseconds 00:14:23.958 Exit Latency: 4 microseconds 00:14:23.958 Relative Read Throughput: 0 00:14:23.958 Relative Read Latency: 0 00:14:23.958 Relative Write Throughput: 0 00:14:23.958 Relative Write Latency: 0 00:14:23.958 Idle Power: Not Reported 00:14:23.958 Active Power: Not Reported 00:14:23.958 Non-Operational Permissive Mode: Not Supported 00:14:23.958 00:14:23.958 Health Information 00:14:23.958 ================== 00:14:23.958 Critical Warnings: 00:14:23.958 Available Spare Space: OK 00:14:23.958 Temperature: OK 00:14:23.958 Device Reliability: OK 00:14:23.958 Read Only: No 00:14:23.958 Volatile Memory Backup: OK 00:14:23.958 Current Temperature: 323 Kelvin (50 Celsius) 00:14:23.958 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:23.958 Available Spare: 0% 00:14:23.958 Available Spare Threshold: 0% 00:14:23.958 Life Percentage Used: 0% 00:14:23.958 Data Units Read: 746 00:14:23.958 Data Units Written: 674 00:14:23.958 Host Read Commands: 33803 00:14:23.958 Host Write Commands: 33589 00:14:23.958 Controller Busy Time: 0 minutes 00:14:23.958 Power Cycles: 0 00:14:23.958 Power On Hours: 0 hours 00:14:23.958 Unsafe Shutdowns: 0 00:14:23.958 Unrecoverable Media Errors: 0 00:14:23.958 Lifetime Error Log Entries: 0 00:14:23.958 Warning Temperature Time: 0 minutes 00:14:23.958 Critical Temperature Time: 0 minutes 00:14:23.958 00:14:23.958 Number of Queues 00:14:23.958 ================ 00:14:23.958 Number of I/O Submission Queues: 64 00:14:23.958 Number of I/O Completion Queues: 64 00:14:23.958 00:14:23.958 ZNS Specific Controller Data 00:14:23.958 ============================ 00:14:23.958 Zone Append Size Limit: 0 00:14:23.958 00:14:23.958 00:14:23.958 Active Namespaces 00:14:23.958 ================= 00:14:23.958 Namespace ID:1 00:14:23.958 Error Recovery Timeout: Unlimited 00:14:23.958 Command Set Identifier: NVM (00h) 00:14:23.958 Deallocate: Supported 
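The health blocks report each temperature in both units; the pairs shown above (323 Kelvin / 50 Celsius and 343 Kelvin / 70 Celsius) correspond to subtracting the integer offset 273, as in this small sketch:

# The reported Kelvin/Celsius pairs use the integer offset 273.
k_to_c() { echo "$(( $1 - 273 ))"; }
k_to_c 323   # 50 (current temperature)
k_to_c 343   # 70 (temperature threshold)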
00:14:23.958 Deallocated/Unwritten Error: Supported 00:14:23.958 Deallocated Read Value: All 0x00 00:14:23.958 Deallocate in Write Zeroes: Not Supported 00:14:23.958 Deallocated Guard Field: 0xFFFF 00:14:23.958 Flush: Supported 00:14:23.958 Reservation: Not Supported 00:14:23.958 Metadata Transferred as: Separate Metadata Buffer 00:14:23.958 Namespace Sharing Capabilities: Private 00:14:23.958 Size (in LBAs): 1548666 (5GiB) 00:14:23.958 Capacity (in LBAs): 1548666 (5GiB) 00:14:23.958 Utilization (in LBAs): 1548666 (5GiB) 00:14:23.958 Thin Provisioning: Not Supported 00:14:23.958 Per-NS Atomic Units: No 00:14:23.958 Maximum Single Source Range Length: 128 00:14:23.958 Maximum Copy Length: 128 00:14:23.958 Maximum Source Range Count: 128 00:14:23.958 NGUID/EUI64 Never Reused: No 00:14:23.958 Namespace Write Protected: No 00:14:23.958 Number of LBA Formats: 8 00:14:23.958 Current LBA Format: LBA Format #07 00:14:23.958 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:23.958 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:23.958 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:23.958 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:23.958 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:23.958 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:23.958 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:23.958 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:23.958 00:14:23.958 NVM Specific Namespace Data 00:14:23.958 =========================== 00:14:23.958 Logical Block Storage Tag Mask: 0 00:14:23.958 Protection Information Capabilities: 00:14:23.958 16b Guard Protection Information Storage Tag Support: No 00:14:23.958 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:23.958 Storage Tag Check Read Support: No 00:14:23.958 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.958 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.958 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.958 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.959 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.959 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.959 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.959 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:23.959 13:33:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:23.959 13:33:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:14:24.219 ===================================================== 00:14:24.219 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:24.219 ===================================================== 00:14:24.219 Controller Capabilities/Features 00:14:24.219 ================================ 00:14:24.219 Vendor ID: 1b36 00:14:24.219 Subsystem Vendor ID: 1af4 00:14:24.219 Serial Number: 12341 00:14:24.219 Model Number: QEMU NVMe Ctrl 00:14:24.219 Firmware Version: 8.0.0 00:14:24.219 Recommended Arb Burst: 6 00:14:24.219 IEEE OUI Identifier: 00 54 52 00:14:24.219 Multi-path I/O 00:14:24.219 May have multiple subsystem ports: No 00:14:24.219 May have multiple 
controllers: No 00:14:24.219 Associated with SR-IOV VF: No 00:14:24.219 Max Data Transfer Size: 524288 00:14:24.219 Max Number of Namespaces: 256 00:14:24.219 Max Number of I/O Queues: 64 00:14:24.219 NVMe Specification Version (VS): 1.4 00:14:24.219 NVMe Specification Version (Identify): 1.4 00:14:24.219 Maximum Queue Entries: 2048 00:14:24.219 Contiguous Queues Required: Yes 00:14:24.219 Arbitration Mechanisms Supported 00:14:24.219 Weighted Round Robin: Not Supported 00:14:24.219 Vendor Specific: Not Supported 00:14:24.219 Reset Timeout: 7500 ms 00:14:24.219 Doorbell Stride: 4 bytes 00:14:24.219 NVM Subsystem Reset: Not Supported 00:14:24.219 Command Sets Supported 00:14:24.219 NVM Command Set: Supported 00:14:24.219 Boot Partition: Not Supported 00:14:24.219 Memory Page Size Minimum: 4096 bytes 00:14:24.219 Memory Page Size Maximum: 65536 bytes 00:14:24.219 Persistent Memory Region: Not Supported 00:14:24.219 Optional Asynchronous Events Supported 00:14:24.219 Namespace Attribute Notices: Supported 00:14:24.219 Firmware Activation Notices: Not Supported 00:14:24.219 ANA Change Notices: Not Supported 00:14:24.219 PLE Aggregate Log Change Notices: Not Supported 00:14:24.219 LBA Status Info Alert Notices: Not Supported 00:14:24.219 EGE Aggregate Log Change Notices: Not Supported 00:14:24.219 Normal NVM Subsystem Shutdown event: Not Supported 00:14:24.219 Zone Descriptor Change Notices: Not Supported 00:14:24.219 Discovery Log Change Notices: Not Supported 00:14:24.219 Controller Attributes 00:14:24.219 128-bit Host Identifier: Not Supported 00:14:24.219 Non-Operational Permissive Mode: Not Supported 00:14:24.219 NVM Sets: Not Supported 00:14:24.219 Read Recovery Levels: Not Supported 00:14:24.219 Endurance Groups: Not Supported 00:14:24.219 Predictable Latency Mode: Not Supported 00:14:24.219 Traffic Based Keep ALive: Not Supported 00:14:24.219 Namespace Granularity: Not Supported 00:14:24.219 SQ Associations: Not Supported 00:14:24.219 UUID List: Not Supported 00:14:24.219 Multi-Domain Subsystem: Not Supported 00:14:24.219 Fixed Capacity Management: Not Supported 00:14:24.219 Variable Capacity Management: Not Supported 00:14:24.219 Delete Endurance Group: Not Supported 00:14:24.219 Delete NVM Set: Not Supported 00:14:24.219 Extended LBA Formats Supported: Supported 00:14:24.219 Flexible Data Placement Supported: Not Supported 00:14:24.219 00:14:24.219 Controller Memory Buffer Support 00:14:24.219 ================================ 00:14:24.219 Supported: No 00:14:24.219 00:14:24.219 Persistent Memory Region Support 00:14:24.219 ================================ 00:14:24.219 Supported: No 00:14:24.219 00:14:24.219 Admin Command Set Attributes 00:14:24.219 ============================ 00:14:24.219 Security Send/Receive: Not Supported 00:14:24.219 Format NVM: Supported 00:14:24.219 Firmware Activate/Download: Not Supported 00:14:24.219 Namespace Management: Supported 00:14:24.219 Device Self-Test: Not Supported 00:14:24.219 Directives: Supported 00:14:24.219 NVMe-MI: Not Supported 00:14:24.219 Virtualization Management: Not Supported 00:14:24.219 Doorbell Buffer Config: Supported 00:14:24.219 Get LBA Status Capability: Not Supported 00:14:24.219 Command & Feature Lockdown Capability: Not Supported 00:14:24.219 Abort Command Limit: 4 00:14:24.219 Async Event Request Limit: 4 00:14:24.219 Number of Firmware Slots: N/A 00:14:24.219 Firmware Slot 1 Read-Only: N/A 00:14:24.219 Firmware Activation Without Reset: N/A 00:14:24.219 Multiple Update Detection Support: N/A 00:14:24.219 Firmware Update 
Granularity: No Information Provided 00:14:24.219 Per-Namespace SMART Log: Yes 00:14:24.219 Asymmetric Namespace Access Log Page: Not Supported 00:14:24.219 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:14:24.219 Command Effects Log Page: Supported 00:14:24.219 Get Log Page Extended Data: Supported 00:14:24.219 Telemetry Log Pages: Not Supported 00:14:24.219 Persistent Event Log Pages: Not Supported 00:14:24.219 Supported Log Pages Log Page: May Support 00:14:24.219 Commands Supported & Effects Log Page: Not Supported 00:14:24.219 Feature Identifiers & Effects Log Page:May Support 00:14:24.219 NVMe-MI Commands & Effects Log Page: May Support 00:14:24.219 Data Area 4 for Telemetry Log: Not Supported 00:14:24.219 Error Log Page Entries Supported: 1 00:14:24.219 Keep Alive: Not Supported 00:14:24.219 00:14:24.219 NVM Command Set Attributes 00:14:24.219 ========================== 00:14:24.219 Submission Queue Entry Size 00:14:24.219 Max: 64 00:14:24.219 Min: 64 00:14:24.219 Completion Queue Entry Size 00:14:24.219 Max: 16 00:14:24.219 Min: 16 00:14:24.219 Number of Namespaces: 256 00:14:24.219 Compare Command: Supported 00:14:24.219 Write Uncorrectable Command: Not Supported 00:14:24.219 Dataset Management Command: Supported 00:14:24.219 Write Zeroes Command: Supported 00:14:24.219 Set Features Save Field: Supported 00:14:24.219 Reservations: Not Supported 00:14:24.219 Timestamp: Supported 00:14:24.219 Copy: Supported 00:14:24.219 Volatile Write Cache: Present 00:14:24.219 Atomic Write Unit (Normal): 1 00:14:24.219 Atomic Write Unit (PFail): 1 00:14:24.219 Atomic Compare & Write Unit: 1 00:14:24.219 Fused Compare & Write: Not Supported 00:14:24.219 Scatter-Gather List 00:14:24.219 SGL Command Set: Supported 00:14:24.219 SGL Keyed: Not Supported 00:14:24.219 SGL Bit Bucket Descriptor: Not Supported 00:14:24.219 SGL Metadata Pointer: Not Supported 00:14:24.219 Oversized SGL: Not Supported 00:14:24.219 SGL Metadata Address: Not Supported 00:14:24.219 SGL Offset: Not Supported 00:14:24.219 Transport SGL Data Block: Not Supported 00:14:24.219 Replay Protected Memory Block: Not Supported 00:14:24.219 00:14:24.219 Firmware Slot Information 00:14:24.219 ========================= 00:14:24.219 Active slot: 1 00:14:24.219 Slot 1 Firmware Revision: 1.0 00:14:24.219 00:14:24.219 00:14:24.219 Commands Supported and Effects 00:14:24.219 ============================== 00:14:24.219 Admin Commands 00:14:24.219 -------------- 00:14:24.219 Delete I/O Submission Queue (00h): Supported 00:14:24.219 Create I/O Submission Queue (01h): Supported 00:14:24.219 Get Log Page (02h): Supported 00:14:24.219 Delete I/O Completion Queue (04h): Supported 00:14:24.219 Create I/O Completion Queue (05h): Supported 00:14:24.219 Identify (06h): Supported 00:14:24.219 Abort (08h): Supported 00:14:24.219 Set Features (09h): Supported 00:14:24.219 Get Features (0Ah): Supported 00:14:24.219 Asynchronous Event Request (0Ch): Supported 00:14:24.220 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:24.220 Directive Send (19h): Supported 00:14:24.220 Directive Receive (1Ah): Supported 00:14:24.220 Virtualization Management (1Ch): Supported 00:14:24.220 Doorbell Buffer Config (7Ch): Supported 00:14:24.220 Format NVM (80h): Supported LBA-Change 00:14:24.220 I/O Commands 00:14:24.220 ------------ 00:14:24.220 Flush (00h): Supported LBA-Change 00:14:24.220 Write (01h): Supported LBA-Change 00:14:24.220 Read (02h): Supported 00:14:24.220 Compare (05h): Supported 00:14:24.220 Write Zeroes (08h): Supported LBA-Change 00:14:24.220 
Dataset Management (09h): Supported LBA-Change 00:14:24.220 Unknown (0Ch): Supported 00:14:24.220 Unknown (12h): Supported 00:14:24.220 Copy (19h): Supported LBA-Change 00:14:24.220 Unknown (1Dh): Supported LBA-Change 00:14:24.220 00:14:24.220 Error Log 00:14:24.220 ========= 00:14:24.220 00:14:24.220 Arbitration 00:14:24.220 =========== 00:14:24.220 Arbitration Burst: no limit 00:14:24.220 00:14:24.220 Power Management 00:14:24.220 ================ 00:14:24.220 Number of Power States: 1 00:14:24.220 Current Power State: Power State #0 00:14:24.220 Power State #0: 00:14:24.220 Max Power: 25.00 W 00:14:24.220 Non-Operational State: Operational 00:14:24.220 Entry Latency: 16 microseconds 00:14:24.220 Exit Latency: 4 microseconds 00:14:24.220 Relative Read Throughput: 0 00:14:24.220 Relative Read Latency: 0 00:14:24.220 Relative Write Throughput: 0 00:14:24.220 Relative Write Latency: 0 00:14:24.220 Idle Power: Not Reported 00:14:24.220 Active Power: Not Reported 00:14:24.220 Non-Operational Permissive Mode: Not Supported 00:14:24.220 00:14:24.220 Health Information 00:14:24.220 ================== 00:14:24.220 Critical Warnings: 00:14:24.220 Available Spare Space: OK 00:14:24.220 Temperature: OK 00:14:24.220 Device Reliability: OK 00:14:24.220 Read Only: No 00:14:24.220 Volatile Memory Backup: OK 00:14:24.220 Current Temperature: 323 Kelvin (50 Celsius) 00:14:24.220 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:24.220 Available Spare: 0% 00:14:24.220 Available Spare Threshold: 0% 00:14:24.220 Life Percentage Used: 0% 00:14:24.220 Data Units Read: 1137 00:14:24.220 Data Units Written: 1004 00:14:24.220 Host Read Commands: 51374 00:14:24.220 Host Write Commands: 50155 00:14:24.220 Controller Busy Time: 0 minutes 00:14:24.220 Power Cycles: 0 00:14:24.220 Power On Hours: 0 hours 00:14:24.220 Unsafe Shutdowns: 0 00:14:24.220 Unrecoverable Media Errors: 0 00:14:24.220 Lifetime Error Log Entries: 0 00:14:24.220 Warning Temperature Time: 0 minutes 00:14:24.220 Critical Temperature Time: 0 minutes 00:14:24.220 00:14:24.220 Number of Queues 00:14:24.220 ================ 00:14:24.220 Number of I/O Submission Queues: 64 00:14:24.220 Number of I/O Completion Queues: 64 00:14:24.220 00:14:24.220 ZNS Specific Controller Data 00:14:24.220 ============================ 00:14:24.220 Zone Append Size Limit: 0 00:14:24.220 00:14:24.220 00:14:24.220 Active Namespaces 00:14:24.220 ================= 00:14:24.220 Namespace ID:1 00:14:24.220 Error Recovery Timeout: Unlimited 00:14:24.220 Command Set Identifier: NVM (00h) 00:14:24.220 Deallocate: Supported 00:14:24.220 Deallocated/Unwritten Error: Supported 00:14:24.220 Deallocated Read Value: All 0x00 00:14:24.220 Deallocate in Write Zeroes: Not Supported 00:14:24.220 Deallocated Guard Field: 0xFFFF 00:14:24.220 Flush: Supported 00:14:24.220 Reservation: Not Supported 00:14:24.220 Namespace Sharing Capabilities: Private 00:14:24.220 Size (in LBAs): 1310720 (5GiB) 00:14:24.220 Capacity (in LBAs): 1310720 (5GiB) 00:14:24.220 Utilization (in LBAs): 1310720 (5GiB) 00:14:24.220 Thin Provisioning: Not Supported 00:14:24.220 Per-NS Atomic Units: No 00:14:24.220 Maximum Single Source Range Length: 128 00:14:24.220 Maximum Copy Length: 128 00:14:24.220 Maximum Source Range Count: 128 00:14:24.220 NGUID/EUI64 Never Reused: No 00:14:24.220 Namespace Write Protected: No 00:14:24.220 Number of LBA Formats: 8 00:14:24.220 Current LBA Format: LBA Format #04 00:14:24.220 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:24.220 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:14:24.220 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:24.220 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:24.220 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:24.220 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:24.220 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:24.220 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:24.220 00:14:24.220 NVM Specific Namespace Data 00:14:24.220 =========================== 00:14:24.220 Logical Block Storage Tag Mask: 0 00:14:24.220 Protection Information Capabilities: 00:14:24.220 16b Guard Protection Information Storage Tag Support: No 00:14:24.220 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:24.220 Storage Tag Check Read Support: No 00:14:24.220 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.220 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.220 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.220 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.220 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.220 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.220 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.220 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.220 13:33:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:24.220 13:33:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:14:24.481 ===================================================== 00:14:24.481 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:24.481 ===================================================== 00:14:24.481 Controller Capabilities/Features 00:14:24.481 ================================ 00:14:24.481 Vendor ID: 1b36 00:14:24.481 Subsystem Vendor ID: 1af4 00:14:24.481 Serial Number: 12342 00:14:24.481 Model Number: QEMU NVMe Ctrl 00:14:24.481 Firmware Version: 8.0.0 00:14:24.481 Recommended Arb Burst: 6 00:14:24.481 IEEE OUI Identifier: 00 54 52 00:14:24.481 Multi-path I/O 00:14:24.481 May have multiple subsystem ports: No 00:14:24.481 May have multiple controllers: No 00:14:24.481 Associated with SR-IOV VF: No 00:14:24.481 Max Data Transfer Size: 524288 00:14:24.481 Max Number of Namespaces: 256 00:14:24.481 Max Number of I/O Queues: 64 00:14:24.481 NVMe Specification Version (VS): 1.4 00:14:24.481 NVMe Specification Version (Identify): 1.4 00:14:24.481 Maximum Queue Entries: 2048 00:14:24.481 Contiguous Queues Required: Yes 00:14:24.481 Arbitration Mechanisms Supported 00:14:24.481 Weighted Round Robin: Not Supported 00:14:24.481 Vendor Specific: Not Supported 00:14:24.481 Reset Timeout: 7500 ms 00:14:24.481 Doorbell Stride: 4 bytes 00:14:24.481 NVM Subsystem Reset: Not Supported 00:14:24.481 Command Sets Supported 00:14:24.481 NVM Command Set: Supported 00:14:24.481 Boot Partition: Not Supported 00:14:24.481 Memory Page Size Minimum: 4096 bytes 00:14:24.481 Memory Page Size Maximum: 65536 bytes 00:14:24.481 Persistent Memory Region: Not Supported 00:14:24.481 Optional Asynchronous Events Supported 00:14:24.481 Namespace Attribute Notices: Supported 00:14:24.481 
Firmware Activation Notices: Not Supported 00:14:24.481 ANA Change Notices: Not Supported 00:14:24.481 PLE Aggregate Log Change Notices: Not Supported 00:14:24.481 LBA Status Info Alert Notices: Not Supported 00:14:24.481 EGE Aggregate Log Change Notices: Not Supported 00:14:24.481 Normal NVM Subsystem Shutdown event: Not Supported 00:14:24.481 Zone Descriptor Change Notices: Not Supported 00:14:24.481 Discovery Log Change Notices: Not Supported 00:14:24.481 Controller Attributes 00:14:24.481 128-bit Host Identifier: Not Supported 00:14:24.481 Non-Operational Permissive Mode: Not Supported 00:14:24.481 NVM Sets: Not Supported 00:14:24.481 Read Recovery Levels: Not Supported 00:14:24.481 Endurance Groups: Not Supported 00:14:24.481 Predictable Latency Mode: Not Supported 00:14:24.481 Traffic Based Keep ALive: Not Supported 00:14:24.481 Namespace Granularity: Not Supported 00:14:24.481 SQ Associations: Not Supported 00:14:24.481 UUID List: Not Supported 00:14:24.481 Multi-Domain Subsystem: Not Supported 00:14:24.481 Fixed Capacity Management: Not Supported 00:14:24.481 Variable Capacity Management: Not Supported 00:14:24.481 Delete Endurance Group: Not Supported 00:14:24.481 Delete NVM Set: Not Supported 00:14:24.481 Extended LBA Formats Supported: Supported 00:14:24.481 Flexible Data Placement Supported: Not Supported 00:14:24.481 00:14:24.481 Controller Memory Buffer Support 00:14:24.481 ================================ 00:14:24.481 Supported: No 00:14:24.481 00:14:24.481 Persistent Memory Region Support 00:14:24.481 ================================ 00:14:24.481 Supported: No 00:14:24.481 00:14:24.481 Admin Command Set Attributes 00:14:24.481 ============================ 00:14:24.481 Security Send/Receive: Not Supported 00:14:24.481 Format NVM: Supported 00:14:24.481 Firmware Activate/Download: Not Supported 00:14:24.481 Namespace Management: Supported 00:14:24.481 Device Self-Test: Not Supported 00:14:24.481 Directives: Supported 00:14:24.481 NVMe-MI: Not Supported 00:14:24.481 Virtualization Management: Not Supported 00:14:24.481 Doorbell Buffer Config: Supported 00:14:24.481 Get LBA Status Capability: Not Supported 00:14:24.481 Command & Feature Lockdown Capability: Not Supported 00:14:24.481 Abort Command Limit: 4 00:14:24.481 Async Event Request Limit: 4 00:14:24.481 Number of Firmware Slots: N/A 00:14:24.481 Firmware Slot 1 Read-Only: N/A 00:14:24.481 Firmware Activation Without Reset: N/A 00:14:24.481 Multiple Update Detection Support: N/A 00:14:24.481 Firmware Update Granularity: No Information Provided 00:14:24.481 Per-Namespace SMART Log: Yes 00:14:24.481 Asymmetric Namespace Access Log Page: Not Supported 00:14:24.481 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:14:24.481 Command Effects Log Page: Supported 00:14:24.481 Get Log Page Extended Data: Supported 00:14:24.481 Telemetry Log Pages: Not Supported 00:14:24.481 Persistent Event Log Pages: Not Supported 00:14:24.481 Supported Log Pages Log Page: May Support 00:14:24.481 Commands Supported & Effects Log Page: Not Supported 00:14:24.481 Feature Identifiers & Effects Log Page:May Support 00:14:24.481 NVMe-MI Commands & Effects Log Page: May Support 00:14:24.481 Data Area 4 for Telemetry Log: Not Supported 00:14:24.481 Error Log Page Entries Supported: 1 00:14:24.481 Keep Alive: Not Supported 00:14:24.481 00:14:24.481 NVM Command Set Attributes 00:14:24.481 ========================== 00:14:24.481 Submission Queue Entry Size 00:14:24.481 Max: 64 00:14:24.481 Min: 64 00:14:24.481 Completion Queue Entry Size 00:14:24.481 Max: 16 
00:14:24.481 Min: 16 00:14:24.481 Number of Namespaces: 256 00:14:24.481 Compare Command: Supported 00:14:24.481 Write Uncorrectable Command: Not Supported 00:14:24.481 Dataset Management Command: Supported 00:14:24.481 Write Zeroes Command: Supported 00:14:24.481 Set Features Save Field: Supported 00:14:24.481 Reservations: Not Supported 00:14:24.481 Timestamp: Supported 00:14:24.481 Copy: Supported 00:14:24.481 Volatile Write Cache: Present 00:14:24.481 Atomic Write Unit (Normal): 1 00:14:24.481 Atomic Write Unit (PFail): 1 00:14:24.481 Atomic Compare & Write Unit: 1 00:14:24.481 Fused Compare & Write: Not Supported 00:14:24.481 Scatter-Gather List 00:14:24.481 SGL Command Set: Supported 00:14:24.481 SGL Keyed: Not Supported 00:14:24.481 SGL Bit Bucket Descriptor: Not Supported 00:14:24.481 SGL Metadata Pointer: Not Supported 00:14:24.481 Oversized SGL: Not Supported 00:14:24.481 SGL Metadata Address: Not Supported 00:14:24.481 SGL Offset: Not Supported 00:14:24.481 Transport SGL Data Block: Not Supported 00:14:24.481 Replay Protected Memory Block: Not Supported 00:14:24.481 00:14:24.481 Firmware Slot Information 00:14:24.481 ========================= 00:14:24.481 Active slot: 1 00:14:24.481 Slot 1 Firmware Revision: 1.0 00:14:24.481 00:14:24.481 00:14:24.481 Commands Supported and Effects 00:14:24.481 ============================== 00:14:24.481 Admin Commands 00:14:24.481 -------------- 00:14:24.481 Delete I/O Submission Queue (00h): Supported 00:14:24.481 Create I/O Submission Queue (01h): Supported 00:14:24.481 Get Log Page (02h): Supported 00:14:24.481 Delete I/O Completion Queue (04h): Supported 00:14:24.481 Create I/O Completion Queue (05h): Supported 00:14:24.481 Identify (06h): Supported 00:14:24.481 Abort (08h): Supported 00:14:24.481 Set Features (09h): Supported 00:14:24.481 Get Features (0Ah): Supported 00:14:24.481 Asynchronous Event Request (0Ch): Supported 00:14:24.481 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:24.481 Directive Send (19h): Supported 00:14:24.481 Directive Receive (1Ah): Supported 00:14:24.481 Virtualization Management (1Ch): Supported 00:14:24.481 Doorbell Buffer Config (7Ch): Supported 00:14:24.481 Format NVM (80h): Supported LBA-Change 00:14:24.481 I/O Commands 00:14:24.481 ------------ 00:14:24.481 Flush (00h): Supported LBA-Change 00:14:24.481 Write (01h): Supported LBA-Change 00:14:24.481 Read (02h): Supported 00:14:24.481 Compare (05h): Supported 00:14:24.481 Write Zeroes (08h): Supported LBA-Change 00:14:24.481 Dataset Management (09h): Supported LBA-Change 00:14:24.481 Unknown (0Ch): Supported 00:14:24.481 Unknown (12h): Supported 00:14:24.481 Copy (19h): Supported LBA-Change 00:14:24.481 Unknown (1Dh): Supported LBA-Change 00:14:24.481 00:14:24.481 Error Log 00:14:24.481 ========= 00:14:24.481 00:14:24.481 Arbitration 00:14:24.481 =========== 00:14:24.481 Arbitration Burst: no limit 00:14:24.481 00:14:24.481 Power Management 00:14:24.481 ================ 00:14:24.481 Number of Power States: 1 00:14:24.481 Current Power State: Power State #0 00:14:24.481 Power State #0: 00:14:24.481 Max Power: 25.00 W 00:14:24.481 Non-Operational State: Operational 00:14:24.481 Entry Latency: 16 microseconds 00:14:24.481 Exit Latency: 4 microseconds 00:14:24.481 Relative Read Throughput: 0 00:14:24.481 Relative Read Latency: 0 00:14:24.481 Relative Write Throughput: 0 00:14:24.481 Relative Write Latency: 0 00:14:24.481 Idle Power: Not Reported 00:14:24.481 Active Power: Not Reported 00:14:24.481 Non-Operational Permissive Mode: Not Supported 
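With four controllers each printing the same long list of fields, isolating one section at a time can be easier than scanning a full dump. One hypothetical way to do that, assuming the identify output was first saved to a file (the filename is illustrative, not produced by this run):

# Hypothetical helper: print a single section (e.g. Power Management) from a
# saved identify dump; in the raw tool output each section runs until the
# next blank line. "identify-12342.txt" is an illustrative filename.
awk '/^Power Management$/,/^$/' identify-12342.txt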
00:14:24.481 00:14:24.481 Health Information 00:14:24.481 ================== 00:14:24.481 Critical Warnings: 00:14:24.481 Available Spare Space: OK 00:14:24.481 Temperature: OK 00:14:24.481 Device Reliability: OK 00:14:24.481 Read Only: No 00:14:24.481 Volatile Memory Backup: OK 00:14:24.481 Current Temperature: 323 Kelvin (50 Celsius) 00:14:24.481 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:24.481 Available Spare: 0% 00:14:24.481 Available Spare Threshold: 0% 00:14:24.481 Life Percentage Used: 0% 00:14:24.481 Data Units Read: 2440 00:14:24.481 Data Units Written: 2228 00:14:24.481 Host Read Commands: 103725 00:14:24.481 Host Write Commands: 101994 00:14:24.481 Controller Busy Time: 0 minutes 00:14:24.481 Power Cycles: 0 00:14:24.481 Power On Hours: 0 hours 00:14:24.482 Unsafe Shutdowns: 0 00:14:24.482 Unrecoverable Media Errors: 0 00:14:24.482 Lifetime Error Log Entries: 0 00:14:24.482 Warning Temperature Time: 0 minutes 00:14:24.482 Critical Temperature Time: 0 minutes 00:14:24.482 00:14:24.482 Number of Queues 00:14:24.482 ================ 00:14:24.482 Number of I/O Submission Queues: 64 00:14:24.482 Number of I/O Completion Queues: 64 00:14:24.482 00:14:24.482 ZNS Specific Controller Data 00:14:24.482 ============================ 00:14:24.482 Zone Append Size Limit: 0 00:14:24.482 00:14:24.482 00:14:24.482 Active Namespaces 00:14:24.482 ================= 00:14:24.482 Namespace ID:1 00:14:24.482 Error Recovery Timeout: Unlimited 00:14:24.482 Command Set Identifier: NVM (00h) 00:14:24.482 Deallocate: Supported 00:14:24.482 Deallocated/Unwritten Error: Supported 00:14:24.482 Deallocated Read Value: All 0x00 00:14:24.482 Deallocate in Write Zeroes: Not Supported 00:14:24.482 Deallocated Guard Field: 0xFFFF 00:14:24.482 Flush: Supported 00:14:24.482 Reservation: Not Supported 00:14:24.482 Namespace Sharing Capabilities: Private 00:14:24.482 Size (in LBAs): 1048576 (4GiB) 00:14:24.482 Capacity (in LBAs): 1048576 (4GiB) 00:14:24.482 Utilization (in LBAs): 1048576 (4GiB) 00:14:24.482 Thin Provisioning: Not Supported 00:14:24.482 Per-NS Atomic Units: No 00:14:24.482 Maximum Single Source Range Length: 128 00:14:24.482 Maximum Copy Length: 128 00:14:24.482 Maximum Source Range Count: 128 00:14:24.482 NGUID/EUI64 Never Reused: No 00:14:24.482 Namespace Write Protected: No 00:14:24.482 Number of LBA Formats: 8 00:14:24.482 Current LBA Format: LBA Format #04 00:14:24.482 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:24.482 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:24.482 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:24.482 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:24.482 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:24.482 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:24.482 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:24.482 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:24.482 00:14:24.482 NVM Specific Namespace Data 00:14:24.482 =========================== 00:14:24.482 Logical Block Storage Tag Mask: 0 00:14:24.482 Protection Information Capabilities: 00:14:24.482 16b Guard Protection Information Storage Tag Support: No 00:14:24.482 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:24.482 Storage Tag Check Read Support: No 00:14:24.482 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Namespace ID:2 00:14:24.482 Error Recovery Timeout: Unlimited 00:14:24.482 Command Set Identifier: NVM (00h) 00:14:24.482 Deallocate: Supported 00:14:24.482 Deallocated/Unwritten Error: Supported 00:14:24.482 Deallocated Read Value: All 0x00 00:14:24.482 Deallocate in Write Zeroes: Not Supported 00:14:24.482 Deallocated Guard Field: 0xFFFF 00:14:24.482 Flush: Supported 00:14:24.482 Reservation: Not Supported 00:14:24.482 Namespace Sharing Capabilities: Private 00:14:24.482 Size (in LBAs): 1048576 (4GiB) 00:14:24.482 Capacity (in LBAs): 1048576 (4GiB) 00:14:24.482 Utilization (in LBAs): 1048576 (4GiB) 00:14:24.482 Thin Provisioning: Not Supported 00:14:24.482 Per-NS Atomic Units: No 00:14:24.482 Maximum Single Source Range Length: 128 00:14:24.482 Maximum Copy Length: 128 00:14:24.482 Maximum Source Range Count: 128 00:14:24.482 NGUID/EUI64 Never Reused: No 00:14:24.482 Namespace Write Protected: No 00:14:24.482 Number of LBA Formats: 8 00:14:24.482 Current LBA Format: LBA Format #04 00:14:24.482 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:24.482 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:24.482 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:24.482 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:24.482 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:24.482 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:24.482 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:24.482 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:24.482 00:14:24.482 NVM Specific Namespace Data 00:14:24.482 =========================== 00:14:24.482 Logical Block Storage Tag Mask: 0 00:14:24.482 Protection Information Capabilities: 00:14:24.482 16b Guard Protection Information Storage Tag Support: No 00:14:24.482 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:24.482 Storage Tag Check Read Support: No 00:14:24.482 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Namespace ID:3 00:14:24.482 Error Recovery Timeout: Unlimited 00:14:24.482 Command Set Identifier: NVM (00h) 00:14:24.482 Deallocate: Supported 00:14:24.482 Deallocated/Unwritten Error: Supported 00:14:24.482 Deallocated Read 
Value: All 0x00 00:14:24.482 Deallocate in Write Zeroes: Not Supported 00:14:24.482 Deallocated Guard Field: 0xFFFF 00:14:24.482 Flush: Supported 00:14:24.482 Reservation: Not Supported 00:14:24.482 Namespace Sharing Capabilities: Private 00:14:24.482 Size (in LBAs): 1048576 (4GiB) 00:14:24.482 Capacity (in LBAs): 1048576 (4GiB) 00:14:24.482 Utilization (in LBAs): 1048576 (4GiB) 00:14:24.482 Thin Provisioning: Not Supported 00:14:24.482 Per-NS Atomic Units: No 00:14:24.482 Maximum Single Source Range Length: 128 00:14:24.482 Maximum Copy Length: 128 00:14:24.482 Maximum Source Range Count: 128 00:14:24.482 NGUID/EUI64 Never Reused: No 00:14:24.482 Namespace Write Protected: No 00:14:24.482 Number of LBA Formats: 8 00:14:24.482 Current LBA Format: LBA Format #04 00:14:24.482 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:24.482 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:24.482 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:24.482 LBA Format #03: Data Size: 512 Metadata Size: 64 00:14:24.482 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:24.482 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:24.482 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:24.482 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:24.482 00:14:24.482 NVM Specific Namespace Data 00:14:24.482 =========================== 00:14:24.482 Logical Block Storage Tag Mask: 0 00:14:24.482 Protection Information Capabilities: 00:14:24.482 16b Guard Protection Information Storage Tag Support: No 00:14:24.482 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:24.482 Storage Tag Check Read Support: No 00:14:24.482 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:24.482 13:33:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:14:24.482 13:33:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:14:24.742 ===================================================== 00:14:24.742 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:24.742 ===================================================== 00:14:24.742 Controller Capabilities/Features 00:14:24.742 ================================ 00:14:24.742 Vendor ID: 1b36 00:14:24.742 Subsystem Vendor ID: 1af4 00:14:24.742 Serial Number: 12343 00:14:24.742 Model Number: QEMU NVMe Ctrl 00:14:24.742 Firmware Version: 8.0.0 00:14:24.742 Recommended Arb Burst: 6 00:14:24.742 IEEE OUI Identifier: 00 54 52 00:14:24.742 Multi-path I/O 00:14:24.742 May have multiple subsystem ports: No 00:14:24.742 May have multiple controllers: Yes 00:14:24.742 Associated with SR-IOV VF: No 00:14:24.742 Max Data Transfer Size: 524288 00:14:24.742 Max Number of Namespaces: 
256 00:14:24.742 Max Number of I/O Queues: 64 00:14:24.742 NVMe Specification Version (VS): 1.4 00:14:24.742 NVMe Specification Version (Identify): 1.4 00:14:24.742 Maximum Queue Entries: 2048 00:14:24.742 Contiguous Queues Required: Yes 00:14:24.742 Arbitration Mechanisms Supported 00:14:24.742 Weighted Round Robin: Not Supported 00:14:24.742 Vendor Specific: Not Supported 00:14:24.742 Reset Timeout: 7500 ms 00:14:24.742 Doorbell Stride: 4 bytes 00:14:24.742 NVM Subsystem Reset: Not Supported 00:14:24.742 Command Sets Supported 00:14:24.742 NVM Command Set: Supported 00:14:24.742 Boot Partition: Not Supported 00:14:24.742 Memory Page Size Minimum: 4096 bytes 00:14:24.742 Memory Page Size Maximum: 65536 bytes 00:14:24.742 Persistent Memory Region: Not Supported 00:14:24.742 Optional Asynchronous Events Supported 00:14:24.742 Namespace Attribute Notices: Supported 00:14:24.742 Firmware Activation Notices: Not Supported 00:14:24.742 ANA Change Notices: Not Supported 00:14:24.742 PLE Aggregate Log Change Notices: Not Supported 00:14:24.742 LBA Status Info Alert Notices: Not Supported 00:14:24.742 EGE Aggregate Log Change Notices: Not Supported 00:14:24.742 Normal NVM Subsystem Shutdown event: Not Supported 00:14:24.742 Zone Descriptor Change Notices: Not Supported 00:14:24.742 Discovery Log Change Notices: Not Supported 00:14:24.742 Controller Attributes 00:14:24.742 128-bit Host Identifier: Not Supported 00:14:24.742 Non-Operational Permissive Mode: Not Supported 00:14:24.742 NVM Sets: Not Supported 00:14:24.742 Read Recovery Levels: Not Supported 00:14:24.742 Endurance Groups: Supported 00:14:24.742 Predictable Latency Mode: Not Supported 00:14:24.742 Traffic Based Keep Alive: Not Supported 00:14:24.742 Namespace Granularity: Not Supported 00:14:24.742 SQ Associations: Not Supported 00:14:24.742 UUID List: Not Supported 00:14:24.742 Multi-Domain Subsystem: Not Supported 00:14:24.742 Fixed Capacity Management: Not Supported 00:14:24.742 Variable Capacity Management: Not Supported 00:14:24.742 Delete Endurance Group: Not Supported 00:14:24.742 Delete NVM Set: Not Supported 00:14:24.742 Extended LBA Formats Supported: Supported 00:14:24.742 Flexible Data Placement Supported: Supported 00:14:24.742 00:14:24.742 Controller Memory Buffer Support 00:14:24.742 ================================ 00:14:24.742 Supported: No 00:14:24.742 00:14:24.742 Persistent Memory Region Support 00:14:24.742 ================================ 00:14:24.742 Supported: No 00:14:24.742 00:14:24.742 Admin Command Set Attributes 00:14:24.742 ============================ 00:14:24.742 Security Send/Receive: Not Supported 00:14:24.742 Format NVM: Supported 00:14:24.742 Firmware Activate/Download: Not Supported 00:14:24.742 Namespace Management: Supported 00:14:24.742 Device Self-Test: Not Supported 00:14:24.742 Directives: Supported 00:14:24.742 NVMe-MI: Not Supported 00:14:24.742 Virtualization Management: Not Supported 00:14:24.742 Doorbell Buffer Config: Supported 00:14:24.742 Get LBA Status Capability: Not Supported 00:14:24.742 Command & Feature Lockdown Capability: Not Supported 00:14:24.742 Abort Command Limit: 4 00:14:24.742 Async Event Request Limit: 4 00:14:24.742 Number of Firmware Slots: N/A 00:14:24.742 Firmware Slot 1 Read-Only: N/A 00:14:24.742 Firmware Activation Without Reset: N/A 00:14:24.742 Multiple Update Detection Support: N/A 00:14:24.742 Firmware Update Granularity: No Information Provided 00:14:24.742 Per-Namespace SMART Log: Yes 00:14:24.742 Asymmetric Namespace Access Log Page: Not Supported 
00:14:24.742 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:14:24.742 Command Effects Log Page: Supported 00:14:24.742 Get Log Page Extended Data: Supported 00:14:24.742 Telemetry Log Pages: Not Supported 00:14:24.742 Persistent Event Log Pages: Not Supported 00:14:24.742 Supported Log Pages Log Page: May Support 00:14:24.742 Commands Supported & Effects Log Page: Not Supported 00:14:24.742 Feature Identifiers & Effects Log Page: May Support 00:14:24.742 NVMe-MI Commands & Effects Log Page: May Support 00:14:24.742 Data Area 4 for Telemetry Log: Not Supported 00:14:24.742 Error Log Page Entries Supported: 1 00:14:24.742 Keep Alive: Not Supported 00:14:24.742 00:14:24.742 NVM Command Set Attributes 00:14:24.742 ========================== 00:14:24.742 Submission Queue Entry Size 00:14:24.742 Max: 64 00:14:24.742 Min: 64 00:14:24.742 Completion Queue Entry Size 00:14:24.742 Max: 16 00:14:24.742 Min: 16 00:14:24.742 Number of Namespaces: 256 00:14:24.742 Compare Command: Supported 00:14:24.742 Write Uncorrectable Command: Not Supported 00:14:24.742 Dataset Management Command: Supported 00:14:24.742 Write Zeroes Command: Supported 00:14:24.742 Set Features Save Field: Supported 00:14:24.742 Reservations: Not Supported 00:14:24.742 Timestamp: Supported 00:14:24.742 Copy: Supported 00:14:24.742 Volatile Write Cache: Present 00:14:24.742 Atomic Write Unit (Normal): 1 00:14:24.742 Atomic Write Unit (PFail): 1 00:14:24.742 Atomic Compare & Write Unit: 1 00:14:24.742 Fused Compare & Write: Not Supported 00:14:24.742 Scatter-Gather List 00:14:24.742 SGL Command Set: Supported 00:14:24.742 SGL Keyed: Not Supported 00:14:24.742 SGL Bit Bucket Descriptor: Not Supported 00:14:24.742 SGL Metadata Pointer: Not Supported 00:14:24.743 Oversized SGL: Not Supported 00:14:24.743 SGL Metadata Address: Not Supported 00:14:24.743 SGL Offset: Not Supported 00:14:24.743 Transport SGL Data Block: Not Supported 00:14:24.743 Replay Protected Memory Block: Not Supported 00:14:24.743 00:14:24.743 Firmware Slot Information 00:14:24.743 ========================= 00:14:24.743 Active slot: 1 00:14:24.743 Slot 1 Firmware Revision: 1.0 00:14:24.743 00:14:24.743 00:14:24.743 Commands Supported and Effects 00:14:24.743 ============================== 00:14:24.743 Admin Commands 00:14:24.743 -------------- 00:14:24.743 Delete I/O Submission Queue (00h): Supported 00:14:24.743 Create I/O Submission Queue (01h): Supported 00:14:24.743 Get Log Page (02h): Supported 00:14:24.743 Delete I/O Completion Queue (04h): Supported 00:14:24.743 Create I/O Completion Queue (05h): Supported 00:14:24.743 Identify (06h): Supported 00:14:24.743 Abort (08h): Supported 00:14:24.743 Set Features (09h): Supported 00:14:24.743 Get Features (0Ah): Supported 00:14:24.743 Asynchronous Event Request (0Ch): Supported 00:14:24.743 Namespace Attachment (15h): Supported NS-Inventory-Change 00:14:24.743 Directive Send (19h): Supported 00:14:24.743 Directive Receive (1Ah): Supported 00:14:24.743 Virtualization Management (1Ch): Supported 00:14:24.743 Doorbell Buffer Config (7Ch): Supported 00:14:24.743 Format NVM (80h): Supported LBA-Change 00:14:24.743 I/O Commands 00:14:24.743 ------------ 00:14:24.743 Flush (00h): Supported LBA-Change 00:14:24.743 Write (01h): Supported LBA-Change 00:14:24.743 Read (02h): Supported 00:14:24.743 Compare (05h): Supported 00:14:24.743 Write Zeroes (08h): Supported LBA-Change 00:14:24.743 Dataset Management (09h): Supported LBA-Change 00:14:24.743 Unknown (0Ch): Supported 00:14:24.743 Unknown (12h): Supported 00:14:24.743 Copy 
(19h): Supported LBA-Change 00:14:24.743 Unknown (1Dh): Supported LBA-Change 00:14:24.743 00:14:24.743 Error Log 00:14:24.743 ========= 00:14:24.743 00:14:24.743 Arbitration 00:14:24.743 =========== 00:14:24.743 Arbitration Burst: no limit 00:14:24.743 00:14:24.743 Power Management 00:14:24.743 ================ 00:14:24.743 Number of Power States: 1 00:14:24.743 Current Power State: Power State #0 00:14:24.743 Power State #0: 00:14:24.743 Max Power: 25.00 W 00:14:24.743 Non-Operational State: Operational 00:14:24.743 Entry Latency: 16 microseconds 00:14:24.743 Exit Latency: 4 microseconds 00:14:24.743 Relative Read Throughput: 0 00:14:24.743 Relative Read Latency: 0 00:14:24.743 Relative Write Throughput: 0 00:14:24.743 Relative Write Latency: 0 00:14:24.743 Idle Power: Not Reported 00:14:24.743 Active Power: Not Reported 00:14:24.743 Non-Operational Permissive Mode: Not Supported 00:14:24.743 00:14:24.743 Health Information 00:14:24.743 ================== 00:14:24.743 Critical Warnings: 00:14:24.743 Available Spare Space: OK 00:14:24.743 Temperature: OK 00:14:24.743 Device Reliability: OK 00:14:24.743 Read Only: No 00:14:24.743 Volatile Memory Backup: OK 00:14:24.743 Current Temperature: 323 Kelvin (50 Celsius) 00:14:24.743 Temperature Threshold: 343 Kelvin (70 Celsius) 00:14:24.743 Available Spare: 0% 00:14:24.743 Available Spare Threshold: 0% 00:14:24.743 Life Percentage Used: 0% 00:14:24.743 Data Units Read: 896 00:14:24.743 Data Units Written: 825 00:14:24.743 Host Read Commands: 35276 00:14:24.743 Host Write Commands: 34699 00:14:24.743 Controller Busy Time: 0 minutes 00:14:24.743 Power Cycles: 0 00:14:24.743 Power On Hours: 0 hours 00:14:24.743 Unsafe Shutdowns: 0 00:14:24.743 Unrecoverable Media Errors: 0 00:14:24.743 Lifetime Error Log Entries: 0 00:14:24.743 Warning Temperature Time: 0 minutes 00:14:24.743 Critical Temperature Time: 0 minutes 00:14:24.743 00:14:24.743 Number of Queues 00:14:24.743 ================ 00:14:24.743 Number of I/O Submission Queues: 64 00:14:24.743 Number of I/O Completion Queues: 64 00:14:24.743 00:14:24.743 ZNS Specific Controller Data 00:14:24.743 ============================ 00:14:24.743 Zone Append Size Limit: 0 00:14:24.743 00:14:24.743 00:14:24.743 Active Namespaces 00:14:24.743 ================= 00:14:24.743 Namespace ID:1 00:14:24.743 Error Recovery Timeout: Unlimited 00:14:24.743 Command Set Identifier: NVM (00h) 00:14:24.743 Deallocate: Supported 00:14:24.743 Deallocated/Unwritten Error: Supported 00:14:24.743 Deallocated Read Value: All 0x00 00:14:24.743 Deallocate in Write Zeroes: Not Supported 00:14:24.743 Deallocated Guard Field: 0xFFFF 00:14:24.743 Flush: Supported 00:14:24.743 Reservation: Not Supported 00:14:24.743 Namespace Sharing Capabilities: Multiple Controllers 00:14:24.743 Size (in LBAs): 262144 (1GiB) 00:14:24.743 Capacity (in LBAs): 262144 (1GiB) 00:14:24.743 Utilization (in LBAs): 262144 (1GiB) 00:14:24.743 Thin Provisioning: Not Supported 00:14:24.743 Per-NS Atomic Units: No 00:14:24.743 Maximum Single Source Range Length: 128 00:14:24.743 Maximum Copy Length: 128 00:14:24.743 Maximum Source Range Count: 128 00:14:24.743 NGUID/EUI64 Never Reused: No 00:14:24.743 Namespace Write Protected: No 00:14:24.743 Endurance group ID: 1 00:14:24.743 Number of LBA Formats: 8 00:14:24.743 Current LBA Format: LBA Format #04 00:14:24.743 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:24.743 LBA Format #01: Data Size: 512 Metadata Size: 8 00:14:24.743 LBA Format #02: Data Size: 512 Metadata Size: 16 00:14:24.743 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:14:24.743 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:14:24.743 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:14:24.743 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:14:24.743 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:14:24.743 00:14:24.743 Get Feature FDP: 00:14:24.743 ================ 00:14:24.743 Enabled: Yes 00:14:24.743 FDP configuration index: 0 00:14:24.743 00:14:24.743 FDP configurations log page 00:14:24.743 =========================== 00:14:24.743 Number of FDP configurations: 1 00:14:24.743 Version: 0 00:14:24.743 Size: 112 00:14:24.743 FDP Configuration Descriptor: 0 00:14:24.743 Descriptor Size: 96 00:14:24.743 Reclaim Group Identifier format: 2 00:14:24.743 FDP Volatile Write Cache: Not Present 00:14:24.743 FDP Configuration: Valid 00:14:24.743 Vendor Specific Size: 0 00:14:24.743 Number of Reclaim Groups: 2 00:14:24.743 Number of Reclaim Unit Handles: 8 00:14:24.743 Max Placement Identifiers: 128 00:14:24.743 Number of Namespaces Supported: 256 00:14:24.743 Reclaim Unit Nominal Size: 6000000 bytes 00:14:24.743 Estimated Reclaim Unit Time Limit: Not Reported 00:14:24.743 RUH Desc #000: RUH Type: Initially Isolated 00:14:24.743 RUH Desc #001: RUH Type: Initially Isolated 00:14:24.743 RUH Desc #002: RUH Type: Initially Isolated 00:14:24.743 RUH Desc #003: RUH Type: Initially Isolated 00:14:24.743 RUH Desc #004: RUH Type: Initially Isolated 00:14:24.743 RUH Desc #005: RUH Type: Initially Isolated 00:14:24.743 RUH Desc #006: RUH Type: Initially Isolated 00:14:24.743 RUH Desc #007: RUH Type: Initially Isolated 00:14:24.743 00:14:24.743 FDP reclaim unit handle usage log page 00:14:25.002 ====================================== 00:14:25.002 Number of Reclaim Unit Handles: 8 00:14:25.002 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:25.002 RUH Usage Desc #001: RUH Attributes: Unused 00:14:25.002 RUH Usage Desc #002: RUH Attributes: Unused 00:14:25.002 RUH Usage Desc #003: RUH Attributes: Unused 00:14:25.002 RUH Usage Desc #004: RUH Attributes: Unused 00:14:25.002 RUH Usage Desc #005: RUH Attributes: Unused 00:14:25.002 RUH Usage Desc #006: RUH Attributes: Unused 00:14:25.002 RUH Usage Desc #007: RUH Attributes: Unused 00:14:25.002 00:14:25.002 FDP statistics log page 00:14:25.002 ======================= 00:14:25.002 Host bytes with metadata written: 530620416 00:14:25.002 Media bytes with metadata written: 530677760 00:14:25.002 Media bytes erased: 0 00:14:25.002 00:14:25.002 FDP events log page 00:14:25.002 =================== 00:14:25.002 Number of FDP events: 0 00:14:25.002 00:14:25.002 NVM Specific Namespace Data 00:14:25.002 =========================== 00:14:25.002 Logical Block Storage Tag Mask: 0 00:14:25.002 Protection Information Capabilities: 00:14:25.002 16b Guard Protection Information Storage Tag Support: No 00:14:25.002 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:14:25.002 Storage Tag Check Read Support: No 00:14:25.002 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:25.002 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:25.002 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:25.002 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:25.002 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:25.002 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:25.002 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:25.002 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:14:25.002 00:14:25.002 real 0m1.776s 00:14:25.002 user 0m0.660s 00:14:25.002 sys 0m0.895s 00:14:25.002 13:33:36 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.002 13:33:36 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:14:25.002 ************************************ 00:14:25.002 END TEST nvme_identify 00:14:25.002 ************************************ 00:14:25.002 13:33:36 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:14:25.002 13:33:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:25.002 13:33:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.002 13:33:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:25.002 ************************************ 00:14:25.002 START TEST nvme_perf 00:14:25.002 ************************************ 00:14:25.002 13:33:36 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:14:25.002 13:33:36 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:14:26.381 Initializing NVMe Controllers 00:14:26.381 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:26.381 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:26.381 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:26.381 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:26.381 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:26.381 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:26.381 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:26.381 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:26.381 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:26.381 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:26.381 Initialization complete. Launching workers. 
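A quick way to cross-check the summary table that follows: throughput should equal IOPS multiplied by the 12288-byte I/O size set by -o 12288, and Little's law puts the average latency of a kept-full queue near the -q 128 queue depth divided by IOPS. Below is a minimal Python sketch of both checks, with the three result figures copied from the table's first row; the script and its variable names are illustrative, not part of the SPDK tooling.

    # Cross-check of the spdk_nvme_perf summary row for PCIE (0000:00:10.0) NSID 1.
    # Run parameters come from the command line above; result figures are copied
    # from the summary table below.
    QUEUE_DEPTH = 128      # -q 128
    IO_SIZE = 12288        # -o 12288, bytes per I/O (12 KiB)

    iops = 13864.53        # IOPS column
    mib_s = 162.47         # MiB/s column
    avg_us = 9250.81       # Average latency column, in microseconds

    # Throughput check: IOPS * I/O size should reproduce the MiB/s column.
    assert abs(iops * IO_SIZE / 2**20 - mib_s) < 0.01

    # Little's law: with the queue kept full, latency ~= queue depth / IOPS.
    estimate_us = QUEUE_DEPTH / iops * 1e6
    print(f"Little's law estimate: {estimate_us:.2f} us (reported: {avg_us:.2f} us)")

The estimate lands near 9232 us against the reported 9250.81 us; a small gap in that direction is expected, since the queue is not perfectly full over the entire 1-second run (-t 1).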
00:14:26.381 ======================================================== 00:14:26.381 Latency(us) 00:14:26.381 Device Information : IOPS MiB/s Average min max 00:14:26.381 PCIE (0000:00:10.0) NSID 1 from core 0: 13864.53 162.47 9250.81 7648.83 43015.21 00:14:26.381 PCIE (0000:00:11.0) NSID 1 from core 0: 13864.53 162.47 9235.80 7649.10 40780.26 00:14:26.381 PCIE (0000:00:13.0) NSID 1 from core 0: 13864.53 162.47 9219.23 7658.23 39183.67 00:14:26.381 PCIE (0000:00:12.0) NSID 1 from core 0: 13864.53 162.47 9202.57 7677.79 36988.46 00:14:26.381 PCIE (0000:00:12.0) NSID 2 from core 0: 13864.53 162.47 9185.82 7696.80 34857.37 00:14:26.381 PCIE (0000:00:12.0) NSID 3 from core 0: 13928.42 163.22 9127.31 7669.81 28098.55 00:14:26.381 ======================================================== 00:14:26.381 Total : 83251.06 975.60 9203.53 7648.83 43015.21 00:14:26.381 00:14:26.381 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:14:26.381 ================================================================================= 00:14:26.381 1.00000% : 7895.904us 00:14:26.381 10.00000% : 8211.740us 00:14:26.381 25.00000% : 8474.937us 00:14:26.381 50.00000% : 8790.773us 00:14:26.381 75.00000% : 9211.888us 00:14:26.381 90.00000% : 9843.560us 00:14:26.381 95.00000% : 10738.429us 00:14:26.381 98.00000% : 13107.200us 00:14:26.381 99.00000% : 18739.611us 00:14:26.381 99.50000% : 36215.878us 00:14:26.381 99.90000% : 42743.158us 00:14:26.381 99.99000% : 42953.716us 00:14:26.381 99.99900% : 43164.273us 00:14:26.381 99.99990% : 43164.273us 00:14:26.381 99.99999% : 43164.273us 00:14:26.381 00:14:26.381 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:14:26.381 ================================================================================= 00:14:26.381 1.00000% : 7948.543us 00:14:26.381 10.00000% : 8264.379us 00:14:26.381 25.00000% : 8527.576us 00:14:26.381 50.00000% : 8790.773us 00:14:26.381 75.00000% : 9211.888us 00:14:26.381 90.00000% : 9790.920us 00:14:26.381 95.00000% : 10738.429us 00:14:26.381 98.00000% : 13686.233us 00:14:26.381 99.00000% : 18950.169us 00:14:26.381 99.50000% : 34320.861us 00:14:26.381 99.90000% : 40427.027us 00:14:26.381 99.99000% : 40848.141us 00:14:26.381 99.99900% : 40848.141us 00:14:26.381 99.99990% : 40848.141us 00:14:26.381 99.99999% : 40848.141us 00:14:26.381 00:14:26.381 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:14:26.381 ================================================================================= 00:14:26.381 1.00000% : 7948.543us 00:14:26.381 10.00000% : 8264.379us 00:14:26.381 25.00000% : 8527.576us 00:14:26.381 50.00000% : 8790.773us 00:14:26.381 75.00000% : 9159.248us 00:14:26.381 90.00000% : 9790.920us 00:14:26.381 95.00000% : 10791.068us 00:14:26.381 98.00000% : 13686.233us 00:14:26.381 99.00000% : 18634.333us 00:14:26.381 99.50000% : 32846.959us 00:14:26.382 99.90000% : 38953.124us 00:14:26.382 99.99000% : 39163.682us 00:14:26.382 99.99900% : 39374.239us 00:14:26.382 99.99990% : 39374.239us 00:14:26.382 99.99999% : 39374.239us 00:14:26.382 00:14:26.382 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:14:26.382 ================================================================================= 00:14:26.382 1.00000% : 7948.543us 00:14:26.382 10.00000% : 8264.379us 00:14:26.382 25.00000% : 8474.937us 00:14:26.382 50.00000% : 8790.773us 00:14:26.382 75.00000% : 9159.248us 00:14:26.382 90.00000% : 9843.560us 00:14:26.382 95.00000% : 10843.708us 00:14:26.382 98.00000% : 14212.627us 00:14:26.382 99.00000% : 
18213.218us 00:14:26.382 99.50000% : 30741.385us 00:14:26.382 99.90000% : 36636.993us 00:14:26.382 99.99000% : 37058.108us 00:14:26.382 99.99900% : 37058.108us 00:14:26.382 99.99990% : 37058.108us 00:14:26.382 99.99999% : 37058.108us 00:14:26.382 00:14:26.382 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:14:26.382 ================================================================================= 00:14:26.382 1.00000% : 7948.543us 00:14:26.382 10.00000% : 8264.379us 00:14:26.382 25.00000% : 8474.937us 00:14:26.382 50.00000% : 8790.773us 00:14:26.382 75.00000% : 9211.888us 00:14:26.382 90.00000% : 9843.560us 00:14:26.382 95.00000% : 10843.708us 00:14:26.382 98.00000% : 14107.348us 00:14:26.382 99.00000% : 18107.939us 00:14:26.382 99.50000% : 28635.810us 00:14:26.382 99.90000% : 34531.418us 00:14:26.382 99.99000% : 34952.533us 00:14:26.382 99.99900% : 34952.533us 00:14:26.382 99.99990% : 34952.533us 00:14:26.382 99.99999% : 34952.533us 00:14:26.382 00:14:26.382 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:14:26.382 ================================================================================= 00:14:26.382 1.00000% : 7948.543us 00:14:26.382 10.00000% : 8264.379us 00:14:26.382 25.00000% : 8527.576us 00:14:26.382 50.00000% : 8790.773us 00:14:26.382 75.00000% : 9211.888us 00:14:26.382 90.00000% : 9843.560us 00:14:26.382 95.00000% : 10843.708us 00:14:26.382 98.00000% : 12686.085us 00:14:26.382 99.00000% : 18213.218us 00:14:26.382 99.50000% : 21687.415us 00:14:26.382 99.90000% : 27793.581us 00:14:26.382 99.99000% : 28214.696us 00:14:26.382 99.99900% : 28214.696us 00:14:26.382 99.99990% : 28214.696us 00:14:26.382 99.99999% : 28214.696us 00:14:26.382 00:14:26.382 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:14:26.382 ============================================================================== 00:14:26.382 Range in us Cumulative IO count 00:14:26.382 7632.707 - 7685.346: 0.0648% ( 9) 00:14:26.382 7685.346 - 7737.986: 0.2448% ( 25) 00:14:26.382 7737.986 - 7790.625: 0.5256% ( 39) 00:14:26.382 7790.625 - 7843.264: 0.8641% ( 47) 00:14:26.382 7843.264 - 7895.904: 1.5121% ( 90) 00:14:26.382 7895.904 - 7948.543: 2.2465% ( 102) 00:14:26.382 7948.543 - 8001.182: 3.3626% ( 155) 00:14:26.382 8001.182 - 8053.822: 4.7307% ( 190) 00:14:26.382 8053.822 - 8106.461: 6.5884% ( 258) 00:14:26.382 8106.461 - 8159.100: 8.6694% ( 289) 00:14:26.382 8159.100 - 8211.740: 11.2039% ( 352) 00:14:26.382 8211.740 - 8264.379: 13.7673% ( 356) 00:14:26.382 8264.379 - 8317.018: 16.4603% ( 374) 00:14:26.382 8317.018 - 8369.658: 19.5781% ( 433) 00:14:26.382 8369.658 - 8422.297: 23.0127% ( 477) 00:14:26.382 8422.297 - 8474.937: 26.6057% ( 499) 00:14:26.382 8474.937 - 8527.576: 30.3067% ( 514) 00:14:26.382 8527.576 - 8580.215: 34.2886% ( 553) 00:14:26.382 8580.215 - 8632.855: 38.1840% ( 541) 00:14:26.382 8632.855 - 8685.494: 42.2451% ( 564) 00:14:26.382 8685.494 - 8738.133: 46.2054% ( 550) 00:14:26.382 8738.133 - 8790.773: 50.1656% ( 550) 00:14:26.382 8790.773 - 8843.412: 54.0971% ( 546) 00:14:26.382 8843.412 - 8896.051: 57.9277% ( 532) 00:14:26.382 8896.051 - 8948.691: 61.4343% ( 487) 00:14:26.382 8948.691 - 9001.330: 64.7969% ( 467) 00:14:26.382 9001.330 - 9053.969: 67.9363% ( 436) 00:14:26.382 9053.969 - 9106.609: 70.8165% ( 400) 00:14:26.382 9106.609 - 9159.248: 73.4591% ( 367) 00:14:26.382 9159.248 - 9211.888: 75.6768% ( 308) 00:14:26.382 9211.888 - 9264.527: 77.8010% ( 295) 00:14:26.382 9264.527 - 9317.166: 79.8531% ( 285) 00:14:26.382 9317.166 - 9369.806: 81.6604% 
( 251) 00:14:26.382 9369.806 - 9422.445: 83.1509% ( 207) 00:14:26.382 9422.445 - 9475.084: 84.4398% ( 179) 00:14:26.382 9475.084 - 9527.724: 85.5415% ( 153) 00:14:26.382 9527.724 - 9580.363: 86.5567% ( 141) 00:14:26.382 9580.363 - 9633.002: 87.4280% ( 121) 00:14:26.382 9633.002 - 9685.642: 88.2560% ( 115) 00:14:26.382 9685.642 - 9738.281: 88.9545% ( 97) 00:14:26.382 9738.281 - 9790.920: 89.5161% ( 78) 00:14:26.382 9790.920 - 9843.560: 90.0418% ( 73) 00:14:26.382 9843.560 - 9896.199: 90.5458% ( 70) 00:14:26.382 9896.199 - 9948.839: 91.0138% ( 65) 00:14:26.382 9948.839 - 10001.478: 91.3738% ( 50) 00:14:26.382 10001.478 - 10054.117: 91.6907% ( 44) 00:14:26.382 10054.117 - 10106.757: 92.0291% ( 47) 00:14:26.382 10106.757 - 10159.396: 92.2883% ( 36) 00:14:26.382 10159.396 - 10212.035: 92.5475% ( 36) 00:14:26.382 10212.035 - 10264.675: 92.8355% ( 40) 00:14:26.382 10264.675 - 10317.314: 93.0948% ( 36) 00:14:26.382 10317.314 - 10369.953: 93.3468% ( 35) 00:14:26.382 10369.953 - 10422.593: 93.5988% ( 35) 00:14:26.382 10422.593 - 10475.232: 93.8436% ( 34) 00:14:26.382 10475.232 - 10527.871: 94.1028% ( 36) 00:14:26.382 10527.871 - 10580.511: 94.3404% ( 33) 00:14:26.382 10580.511 - 10633.150: 94.6357% ( 41) 00:14:26.382 10633.150 - 10685.790: 94.8733% ( 33) 00:14:26.382 10685.790 - 10738.429: 95.1181% ( 34) 00:14:26.382 10738.429 - 10791.068: 95.3629% ( 34) 00:14:26.382 10791.068 - 10843.708: 95.6077% ( 34) 00:14:26.382 10843.708 - 10896.347: 95.8381% ( 32) 00:14:26.382 10896.347 - 10948.986: 96.0109% ( 24) 00:14:26.382 10948.986 - 11001.626: 96.1982% ( 26) 00:14:26.382 11001.626 - 11054.265: 96.3638% ( 23) 00:14:26.382 11054.265 - 11106.904: 96.5438% ( 25) 00:14:26.382 11106.904 - 11159.544: 96.7166% ( 24) 00:14:26.382 11159.544 - 11212.183: 96.8678% ( 21) 00:14:26.382 11212.183 - 11264.822: 97.0046% ( 19) 00:14:26.382 11264.822 - 11317.462: 97.1486% ( 20) 00:14:26.382 11317.462 - 11370.101: 97.3142% ( 23) 00:14:26.382 11370.101 - 11422.741: 97.4150% ( 14) 00:14:26.382 11422.741 - 11475.380: 97.5086% ( 13) 00:14:26.382 11475.380 - 11528.019: 97.5662% ( 8) 00:14:26.382 11528.019 - 11580.659: 97.5806% ( 2) 00:14:26.382 11580.659 - 11633.298: 97.5950% ( 2) 00:14:26.382 11633.298 - 11685.937: 97.6094% ( 2) 00:14:26.382 11685.937 - 11738.577: 97.6310% ( 3) 00:14:26.382 11738.577 - 11791.216: 97.6454% ( 2) 00:14:26.382 11791.216 - 11843.855: 97.6671% ( 3) 00:14:26.382 11843.855 - 11896.495: 97.6959% ( 4) 00:14:26.382 11896.495 - 11949.134: 97.7103% ( 2) 00:14:26.382 11949.134 - 12001.773: 97.7463% ( 5) 00:14:26.382 12001.773 - 12054.413: 97.7535% ( 1) 00:14:26.382 12054.413 - 12107.052: 97.7679% ( 2) 00:14:26.382 12107.052 - 12159.692: 97.7751% ( 1) 00:14:26.382 12159.692 - 12212.331: 97.7967% ( 3) 00:14:26.382 12212.331 - 12264.970: 97.8039% ( 1) 00:14:26.382 12264.970 - 12317.610: 97.8183% ( 2) 00:14:26.382 12317.610 - 12370.249: 97.8327% ( 2) 00:14:26.382 12370.249 - 12422.888: 97.8399% ( 1) 00:14:26.383 12422.888 - 12475.528: 97.8543% ( 2) 00:14:26.383 12475.528 - 12528.167: 97.8687% ( 2) 00:14:26.383 12528.167 - 12580.806: 97.8759% ( 1) 00:14:26.383 12580.806 - 12633.446: 97.8975% ( 3) 00:14:26.383 12633.446 - 12686.085: 97.9047% ( 1) 00:14:26.383 12686.085 - 12738.724: 97.9191% ( 2) 00:14:26.383 12738.724 - 12791.364: 97.9263% ( 1) 00:14:26.383 12791.364 - 12844.003: 97.9407% ( 2) 00:14:26.383 12844.003 - 12896.643: 97.9551% ( 2) 00:14:26.383 12896.643 - 12949.282: 97.9695% ( 2) 00:14:26.383 12949.282 - 13001.921: 97.9767% ( 1) 00:14:26.383 13001.921 - 13054.561: 97.9911% ( 2) 00:14:26.383 13054.561 - 
13107.200: 98.0055% ( 2) 00:14:26.383 13107.200 - 13159.839: 98.0199% ( 2) 00:14:26.383 13159.839 - 13212.479: 98.0343% ( 2) 00:14:26.383 13212.479 - 13265.118: 98.0415% ( 1) 00:14:26.383 13265.118 - 13317.757: 98.0559% ( 2) 00:14:26.383 13317.757 - 13370.397: 98.0631% ( 1) 00:14:26.383 13370.397 - 13423.036: 98.0775% ( 2) 00:14:26.383 13423.036 - 13475.676: 98.0919% ( 2) 00:14:26.383 13475.676 - 13580.954: 98.1135% ( 3) 00:14:26.383 13580.954 - 13686.233: 98.1351% ( 3) 00:14:26.383 13686.233 - 13791.512: 98.1567% ( 3) 00:14:26.383 15054.856 - 15160.135: 98.1855% ( 4) 00:14:26.383 15160.135 - 15265.414: 98.1999% ( 2) 00:14:26.383 15265.414 - 15370.692: 98.2287% ( 4) 00:14:26.383 15370.692 - 15475.971: 98.2503% ( 3) 00:14:26.383 15475.971 - 15581.250: 98.2575% ( 1) 00:14:26.383 15581.250 - 15686.529: 98.2791% ( 3) 00:14:26.383 15686.529 - 15791.807: 98.3079% ( 4) 00:14:26.383 15791.807 - 15897.086: 98.3223% ( 2) 00:14:26.383 15897.086 - 16002.365: 98.3367% ( 2) 00:14:26.383 16002.365 - 16107.643: 98.3583% ( 3) 00:14:26.383 16107.643 - 16212.922: 98.3799% ( 3) 00:14:26.383 16212.922 - 16318.201: 98.3871% ( 1) 00:14:26.383 16318.201 - 16423.480: 98.4159% ( 4) 00:14:26.383 16423.480 - 16528.758: 98.4375% ( 3) 00:14:26.383 16528.758 - 16634.037: 98.4591% ( 3) 00:14:26.383 16634.037 - 16739.316: 98.4807% ( 3) 00:14:26.383 16739.316 - 16844.594: 98.4951% ( 2) 00:14:26.383 16844.594 - 16949.873: 98.5239% ( 4) 00:14:26.383 16949.873 - 17055.152: 98.5383% ( 2) 00:14:26.383 17055.152 - 17160.431: 98.5599% ( 3) 00:14:26.383 17160.431 - 17265.709: 98.5815% ( 3) 00:14:26.383 17265.709 - 17370.988: 98.6175% ( 5) 00:14:26.383 17370.988 - 17476.267: 98.6535% ( 5) 00:14:26.383 17476.267 - 17581.545: 98.6895% ( 5) 00:14:26.383 17581.545 - 17686.824: 98.7111% ( 3) 00:14:26.383 17686.824 - 17792.103: 98.7471% ( 5) 00:14:26.383 17792.103 - 17897.382: 98.7759% ( 4) 00:14:26.383 17897.382 - 18002.660: 98.8047% ( 4) 00:14:26.383 18002.660 - 18107.939: 98.8407% ( 5) 00:14:26.383 18107.939 - 18213.218: 98.8767% ( 5) 00:14:26.383 18213.218 - 18318.496: 98.9055% ( 4) 00:14:26.383 18318.496 - 18423.775: 98.9271% ( 3) 00:14:26.383 18423.775 - 18529.054: 98.9703% ( 6) 00:14:26.383 18529.054 - 18634.333: 98.9991% ( 4) 00:14:26.383 18634.333 - 18739.611: 99.0351% ( 5) 00:14:26.383 18739.611 - 18844.890: 99.0639% ( 4) 00:14:26.383 18844.890 - 18950.169: 99.0783% ( 2) 00:14:26.383 34320.861 - 34531.418: 99.1143% ( 5) 00:14:26.383 34531.418 - 34741.976: 99.1575% ( 6) 00:14:26.383 34741.976 - 34952.533: 99.2151% ( 8) 00:14:26.383 34952.533 - 35163.091: 99.2584% ( 6) 00:14:26.383 35163.091 - 35373.648: 99.3088% ( 7) 00:14:26.383 35373.648 - 35584.206: 99.3664% ( 8) 00:14:26.383 35584.206 - 35794.763: 99.4096% ( 6) 00:14:26.383 35794.763 - 36005.320: 99.4600% ( 7) 00:14:26.383 36005.320 - 36215.878: 99.5176% ( 8) 00:14:26.383 36215.878 - 36426.435: 99.5392% ( 3) 00:14:26.383 40848.141 - 41058.699: 99.5464% ( 1) 00:14:26.383 41058.699 - 41269.256: 99.6040% ( 8) 00:14:26.383 41269.256 - 41479.814: 99.6472% ( 6) 00:14:26.383 41479.814 - 41690.371: 99.6904% ( 6) 00:14:26.383 41690.371 - 41900.929: 99.7408% ( 7) 00:14:26.383 41900.929 - 42111.486: 99.7912% ( 7) 00:14:26.383 42111.486 - 42322.043: 99.8416% ( 7) 00:14:26.383 42322.043 - 42532.601: 99.8992% ( 8) 00:14:26.383 42532.601 - 42743.158: 99.9496% ( 7) 00:14:26.383 42743.158 - 42953.716: 99.9928% ( 6) 00:14:26.383 42953.716 - 43164.273: 100.0000% ( 1) 00:14:26.383 00:14:26.383 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:14:26.383 
============================================================================== 00:14:26.383 Range in us Cumulative IO count 00:14:26.383 7632.707 - 7685.346: 0.0216% ( 3) 00:14:26.383 7685.346 - 7737.986: 0.0504% ( 4) 00:14:26.383 7737.986 - 7790.625: 0.1584% ( 15) 00:14:26.383 7790.625 - 7843.264: 0.3960% ( 33) 00:14:26.383 7843.264 - 7895.904: 0.6984% ( 42) 00:14:26.383 7895.904 - 7948.543: 1.1521% ( 63) 00:14:26.383 7948.543 - 8001.182: 1.9081% ( 105) 00:14:26.383 8001.182 - 8053.822: 2.9162% ( 140) 00:14:26.383 8053.822 - 8106.461: 4.4283% ( 210) 00:14:26.383 8106.461 - 8159.100: 6.3436% ( 266) 00:14:26.383 8159.100 - 8211.740: 8.6694% ( 323) 00:14:26.383 8211.740 - 8264.379: 11.3767% ( 376) 00:14:26.383 8264.379 - 8317.018: 14.2569% ( 400) 00:14:26.383 8317.018 - 8369.658: 17.4107% ( 438) 00:14:26.383 8369.658 - 8422.297: 20.8669% ( 480) 00:14:26.383 8422.297 - 8474.937: 24.5032% ( 505) 00:14:26.383 8474.937 - 8527.576: 28.6650% ( 578) 00:14:26.383 8527.576 - 8580.215: 32.7693% ( 570) 00:14:26.383 8580.215 - 8632.855: 37.2192% ( 618) 00:14:26.383 8632.855 - 8685.494: 41.8131% ( 638) 00:14:26.383 8685.494 - 8738.133: 46.2774% ( 620) 00:14:26.383 8738.133 - 8790.773: 50.5976% ( 600) 00:14:26.383 8790.773 - 8843.412: 54.7595% ( 578) 00:14:26.383 8843.412 - 8896.051: 58.8494% ( 568) 00:14:26.383 8896.051 - 8948.691: 62.6584% ( 529) 00:14:26.383 8948.691 - 9001.330: 66.1146% ( 480) 00:14:26.383 9001.330 - 9053.969: 69.2684% ( 438) 00:14:26.383 9053.969 - 9106.609: 72.1990% ( 407) 00:14:26.383 9106.609 - 9159.248: 74.7696% ( 357) 00:14:26.383 9159.248 - 9211.888: 77.1601% ( 332) 00:14:26.383 9211.888 - 9264.527: 79.4427% ( 317) 00:14:26.383 9264.527 - 9317.166: 81.4012% ( 272) 00:14:26.383 9317.166 - 9369.806: 83.0357% ( 227) 00:14:26.383 9369.806 - 9422.445: 84.4758% ( 200) 00:14:26.383 9422.445 - 9475.084: 85.5991% ( 156) 00:14:26.383 9475.084 - 9527.724: 86.5711% ( 135) 00:14:26.383 9527.724 - 9580.363: 87.4496% ( 122) 00:14:26.383 9580.363 - 9633.002: 88.2488% ( 111) 00:14:26.383 9633.002 - 9685.642: 88.9905% ( 103) 00:14:26.383 9685.642 - 9738.281: 89.5521% ( 78) 00:14:26.383 9738.281 - 9790.920: 90.0418% ( 68) 00:14:26.383 9790.920 - 9843.560: 90.4594% ( 58) 00:14:26.383 9843.560 - 9896.199: 90.8122% ( 49) 00:14:26.383 9896.199 - 9948.839: 91.1434% ( 46) 00:14:26.383 9948.839 - 10001.478: 91.4675% ( 45) 00:14:26.383 10001.478 - 10054.117: 91.7843% ( 44) 00:14:26.383 10054.117 - 10106.757: 92.0651% ( 39) 00:14:26.383 10106.757 - 10159.396: 92.3027% ( 33) 00:14:26.383 10159.396 - 10212.035: 92.5259% ( 31) 00:14:26.383 10212.035 - 10264.675: 92.7635% ( 33) 00:14:26.383 10264.675 - 10317.314: 92.9940% ( 32) 00:14:26.383 10317.314 - 10369.953: 93.2532% ( 36) 00:14:26.383 10369.953 - 10422.593: 93.4980% ( 34) 00:14:26.383 10422.593 - 10475.232: 93.7860% ( 40) 00:14:26.383 10475.232 - 10527.871: 94.0812% ( 41) 00:14:26.383 10527.871 - 10580.511: 94.3548% ( 38) 00:14:26.383 10580.511 - 10633.150: 94.6285% ( 38) 00:14:26.384 10633.150 - 10685.790: 94.9237% ( 41) 00:14:26.384 10685.790 - 10738.429: 95.1829% ( 36) 00:14:26.384 10738.429 - 10791.068: 95.4565% ( 38) 00:14:26.384 10791.068 - 10843.708: 95.6869% ( 32) 00:14:26.384 10843.708 - 10896.347: 95.9029% ( 30) 00:14:26.384 10896.347 - 10948.986: 96.1046% ( 28) 00:14:26.384 10948.986 - 11001.626: 96.2990% ( 27) 00:14:26.384 11001.626 - 11054.265: 96.4790% ( 25) 00:14:26.384 11054.265 - 11106.904: 96.6590% ( 25) 00:14:26.384 11106.904 - 11159.544: 96.8534% ( 27) 00:14:26.384 11159.544 - 11212.183: 97.0334% ( 25) 00:14:26.384 11212.183 - 
11264.822: 97.1846% ( 21) 00:14:26.384 11264.822 - 11317.462: 97.3070% ( 17) 00:14:26.384 11317.462 - 11370.101: 97.3934% ( 12) 00:14:26.384 11370.101 - 11422.741: 97.4870% ( 13) 00:14:26.384 11422.741 - 11475.380: 97.5446% ( 8) 00:14:26.384 11475.380 - 11528.019: 97.5950% ( 7) 00:14:26.384 11528.019 - 11580.659: 97.6238% ( 4) 00:14:26.384 11580.659 - 11633.298: 97.6526% ( 4) 00:14:26.384 11633.298 - 11685.937: 97.6671% ( 2) 00:14:26.384 11685.937 - 11738.577: 97.6887% ( 3) 00:14:26.384 11738.577 - 11791.216: 97.6959% ( 1) 00:14:26.384 12528.167 - 12580.806: 97.7103% ( 2) 00:14:26.384 12580.806 - 12633.446: 97.7247% ( 2) 00:14:26.384 12633.446 - 12686.085: 97.7391% ( 2) 00:14:26.384 12686.085 - 12738.724: 97.7463% ( 1) 00:14:26.384 12738.724 - 12791.364: 97.7607% ( 2) 00:14:26.384 12791.364 - 12844.003: 97.7751% ( 2) 00:14:26.384 12844.003 - 12896.643: 97.7967% ( 3) 00:14:26.384 12896.643 - 12949.282: 97.8111% ( 2) 00:14:26.384 12949.282 - 13001.921: 97.8255% ( 2) 00:14:26.384 13001.921 - 13054.561: 97.8399% ( 2) 00:14:26.384 13054.561 - 13107.200: 97.8543% ( 2) 00:14:26.384 13107.200 - 13159.839: 97.8687% ( 2) 00:14:26.384 13159.839 - 13212.479: 97.8831% ( 2) 00:14:26.384 13212.479 - 13265.118: 97.9047% ( 3) 00:14:26.384 13265.118 - 13317.757: 97.9191% ( 2) 00:14:26.384 13317.757 - 13370.397: 97.9335% ( 2) 00:14:26.384 13370.397 - 13423.036: 97.9407% ( 1) 00:14:26.384 13423.036 - 13475.676: 97.9551% ( 2) 00:14:26.384 13475.676 - 13580.954: 97.9839% ( 4) 00:14:26.384 13580.954 - 13686.233: 98.0127% ( 4) 00:14:26.384 13686.233 - 13791.512: 98.0415% ( 4) 00:14:26.384 13791.512 - 13896.790: 98.0775% ( 5) 00:14:26.384 13896.790 - 14002.069: 98.1063% ( 4) 00:14:26.384 14002.069 - 14107.348: 98.1351% ( 4) 00:14:26.384 14107.348 - 14212.627: 98.1567% ( 3) 00:14:26.384 14739.020 - 14844.299: 98.1855% ( 4) 00:14:26.384 14844.299 - 14949.578: 98.2071% ( 3) 00:14:26.384 14949.578 - 15054.856: 98.2359% ( 4) 00:14:26.384 15054.856 - 15160.135: 98.2647% ( 4) 00:14:26.384 15160.135 - 15265.414: 98.3007% ( 5) 00:14:26.384 15265.414 - 15370.692: 98.3295% ( 4) 00:14:26.384 15370.692 - 15475.971: 98.3583% ( 4) 00:14:26.384 15475.971 - 15581.250: 98.3799% ( 3) 00:14:26.384 15581.250 - 15686.529: 98.4015% ( 3) 00:14:26.384 15686.529 - 15791.807: 98.4303% ( 4) 00:14:26.384 15791.807 - 15897.086: 98.4591% ( 4) 00:14:26.384 15897.086 - 16002.365: 98.4879% ( 4) 00:14:26.384 16002.365 - 16107.643: 98.5095% ( 3) 00:14:26.384 16107.643 - 16212.922: 98.5383% ( 4) 00:14:26.384 16212.922 - 16318.201: 98.5671% ( 4) 00:14:26.384 16318.201 - 16423.480: 98.5887% ( 3) 00:14:26.384 16423.480 - 16528.758: 98.6175% ( 4) 00:14:26.384 17792.103 - 17897.382: 98.6607% ( 6) 00:14:26.384 17897.382 - 18002.660: 98.6967% ( 5) 00:14:26.384 18002.660 - 18107.939: 98.7399% ( 6) 00:14:26.384 18107.939 - 18213.218: 98.7759% ( 5) 00:14:26.384 18213.218 - 18318.496: 98.8119% ( 5) 00:14:26.384 18318.496 - 18423.775: 98.8479% ( 5) 00:14:26.384 18423.775 - 18529.054: 98.8839% ( 5) 00:14:26.384 18529.054 - 18634.333: 98.9271% ( 6) 00:14:26.384 18634.333 - 18739.611: 98.9559% ( 4) 00:14:26.384 18739.611 - 18844.890: 98.9919% ( 5) 00:14:26.384 18844.890 - 18950.169: 99.0279% ( 5) 00:14:26.384 18950.169 - 19055.447: 99.0711% ( 6) 00:14:26.384 19055.447 - 19160.726: 99.0783% ( 1) 00:14:26.384 32425.844 - 32636.402: 99.1071% ( 4) 00:14:26.384 32636.402 - 32846.959: 99.1575% ( 7) 00:14:26.384 32846.959 - 33057.516: 99.2151% ( 8) 00:14:26.384 33057.516 - 33268.074: 99.2584% ( 6) 00:14:26.384 33268.074 - 33478.631: 99.3232% ( 9) 00:14:26.384 33478.631 - 
33689.189: 99.3736% ( 7) 00:14:26.384 33689.189 - 33899.746: 99.4240% ( 7) 00:14:26.384 33899.746 - 34110.304: 99.4816% ( 8) 00:14:26.384 34110.304 - 34320.861: 99.5392% ( 8) 00:14:26.384 38953.124 - 39163.682: 99.5824% ( 6) 00:14:26.384 39163.682 - 39374.239: 99.6400% ( 8) 00:14:26.384 39374.239 - 39584.797: 99.6904% ( 7) 00:14:26.384 39584.797 - 39795.354: 99.7408% ( 7) 00:14:26.384 39795.354 - 40005.912: 99.7984% ( 8) 00:14:26.384 40005.912 - 40216.469: 99.8488% ( 7) 00:14:26.384 40216.469 - 40427.027: 99.9064% ( 8) 00:14:26.384 40427.027 - 40637.584: 99.9568% ( 7) 00:14:26.384 40637.584 - 40848.141: 100.0000% ( 6) 00:14:26.384 00:14:26.384 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:14:26.384 ============================================================================== 00:14:26.384 Range in us Cumulative IO count 00:14:26.384 7632.707 - 7685.346: 0.0288% ( 4) 00:14:26.384 7685.346 - 7737.986: 0.0864% ( 8) 00:14:26.384 7737.986 - 7790.625: 0.1728% ( 12) 00:14:26.384 7790.625 - 7843.264: 0.3168% ( 20) 00:14:26.384 7843.264 - 7895.904: 0.6696% ( 49) 00:14:26.384 7895.904 - 7948.543: 1.0585% ( 54) 00:14:26.384 7948.543 - 8001.182: 1.7929% ( 102) 00:14:26.384 8001.182 - 8053.822: 2.8802% ( 151) 00:14:26.384 8053.822 - 8106.461: 4.5723% ( 235) 00:14:26.384 8106.461 - 8159.100: 6.3148% ( 242) 00:14:26.384 8159.100 - 8211.740: 8.6046% ( 318) 00:14:26.384 8211.740 - 8264.379: 11.0959% ( 346) 00:14:26.384 8264.379 - 8317.018: 14.0409% ( 409) 00:14:26.384 8317.018 - 8369.658: 17.3171% ( 455) 00:14:26.384 8369.658 - 8422.297: 20.8669% ( 493) 00:14:26.384 8422.297 - 8474.937: 24.6760% ( 529) 00:14:26.384 8474.937 - 8527.576: 28.8666% ( 582) 00:14:26.384 8527.576 - 8580.215: 33.1221% ( 591) 00:14:26.384 8580.215 - 8632.855: 37.8168% ( 652) 00:14:26.384 8632.855 - 8685.494: 42.4179% ( 639) 00:14:26.384 8685.494 - 8738.133: 46.9182% ( 625) 00:14:26.384 8738.133 - 8790.773: 51.3105% ( 610) 00:14:26.384 8790.773 - 8843.412: 55.4507% ( 575) 00:14:26.384 8843.412 - 8896.051: 59.3606% ( 543) 00:14:26.384 8896.051 - 8948.691: 62.9608% ( 500) 00:14:26.384 8948.691 - 9001.330: 66.4459% ( 484) 00:14:26.384 9001.330 - 9053.969: 69.5709% ( 434) 00:14:26.384 9053.969 - 9106.609: 72.5662% ( 416) 00:14:26.384 9106.609 - 9159.248: 75.2088% ( 367) 00:14:26.384 9159.248 - 9211.888: 77.6858% ( 344) 00:14:26.384 9211.888 - 9264.527: 79.8315% ( 298) 00:14:26.384 9264.527 - 9317.166: 81.6820% ( 257) 00:14:26.384 9317.166 - 9369.806: 83.3669% ( 234) 00:14:26.384 9369.806 - 9422.445: 84.7710% ( 195) 00:14:26.384 9422.445 - 9475.084: 85.8727% ( 153) 00:14:26.384 9475.084 - 9527.724: 86.7584% ( 123) 00:14:26.384 9527.724 - 9580.363: 87.5720% ( 113) 00:14:26.384 9580.363 - 9633.002: 88.2849% ( 99) 00:14:26.384 9633.002 - 9685.642: 88.9329% ( 90) 00:14:26.384 9685.642 - 9738.281: 89.5305% ( 83) 00:14:26.385 9738.281 - 9790.920: 90.0562% ( 73) 00:14:26.385 9790.920 - 9843.560: 90.4666% ( 57) 00:14:26.385 9843.560 - 9896.199: 90.8554% ( 54) 00:14:26.385 9896.199 - 9948.839: 91.2010% ( 48) 00:14:26.385 9948.839 - 10001.478: 91.4963% ( 41) 00:14:26.385 10001.478 - 10054.117: 91.7339% ( 33) 00:14:26.385 10054.117 - 10106.757: 91.9067% ( 24) 00:14:26.385 10106.757 - 10159.396: 92.0651% ( 22) 00:14:26.385 10159.396 - 10212.035: 92.2163% ( 21) 00:14:26.385 10212.035 - 10264.675: 92.3747% ( 22) 00:14:26.385 10264.675 - 10317.314: 92.6051% ( 32) 00:14:26.385 10317.314 - 10369.953: 92.8139% ( 29) 00:14:26.385 10369.953 - 10422.593: 93.0804% ( 37) 00:14:26.385 10422.593 - 10475.232: 93.3468% ( 37) 00:14:26.385 
10475.232 - 10527.871: 93.6564% ( 43) 00:14:26.385 10527.871 - 10580.511: 93.9084% ( 35) 00:14:26.385 10580.511 - 10633.150: 94.2180% ( 43) 00:14:26.385 10633.150 - 10685.790: 94.5204% ( 42) 00:14:26.385 10685.790 - 10738.429: 94.7869% ( 37) 00:14:26.385 10738.429 - 10791.068: 95.0245% ( 33) 00:14:26.385 10791.068 - 10843.708: 95.2693% ( 34) 00:14:26.385 10843.708 - 10896.347: 95.5141% ( 34) 00:14:26.385 10896.347 - 10948.986: 95.7589% ( 34) 00:14:26.385 10948.986 - 11001.626: 95.9821% ( 31) 00:14:26.385 11001.626 - 11054.265: 96.2198% ( 33) 00:14:26.385 11054.265 - 11106.904: 96.4358% ( 30) 00:14:26.385 11106.904 - 11159.544: 96.6374% ( 28) 00:14:26.385 11159.544 - 11212.183: 96.8174% ( 25) 00:14:26.385 11212.183 - 11264.822: 96.9614% ( 20) 00:14:26.385 11264.822 - 11317.462: 97.1054% ( 20) 00:14:26.385 11317.462 - 11370.101: 97.2494% ( 20) 00:14:26.385 11370.101 - 11422.741: 97.3646% ( 16) 00:14:26.385 11422.741 - 11475.380: 97.4582% ( 13) 00:14:26.385 11475.380 - 11528.019: 97.5374% ( 11) 00:14:26.385 11528.019 - 11580.659: 97.5878% ( 7) 00:14:26.385 11580.659 - 11633.298: 97.6238% ( 5) 00:14:26.385 11633.298 - 11685.937: 97.6454% ( 3) 00:14:26.385 11685.937 - 11738.577: 97.6671% ( 3) 00:14:26.385 11738.577 - 11791.216: 97.6815% ( 2) 00:14:26.385 11791.216 - 11843.855: 97.6887% ( 1) 00:14:26.385 11843.855 - 11896.495: 97.6959% ( 1) 00:14:26.385 12317.610 - 12370.249: 97.7031% ( 1) 00:14:26.385 12370.249 - 12422.888: 97.7175% ( 2) 00:14:26.385 12422.888 - 12475.528: 97.7319% ( 2) 00:14:26.385 12475.528 - 12528.167: 97.7463% ( 2) 00:14:26.385 12528.167 - 12580.806: 97.7535% ( 1) 00:14:26.385 12580.806 - 12633.446: 97.7679% ( 2) 00:14:26.385 12633.446 - 12686.085: 97.7751% ( 1) 00:14:26.385 12686.085 - 12738.724: 97.7895% ( 2) 00:14:26.385 12738.724 - 12791.364: 97.8039% ( 2) 00:14:26.385 12791.364 - 12844.003: 97.8111% ( 1) 00:14:26.385 12844.003 - 12896.643: 97.8183% ( 1) 00:14:26.385 12896.643 - 12949.282: 97.8327% ( 2) 00:14:26.385 12949.282 - 13001.921: 97.8471% ( 2) 00:14:26.385 13001.921 - 13054.561: 97.8615% ( 2) 00:14:26.385 13054.561 - 13107.200: 97.8759% ( 2) 00:14:26.385 13107.200 - 13159.839: 97.8903% ( 2) 00:14:26.385 13159.839 - 13212.479: 97.9047% ( 2) 00:14:26.385 13212.479 - 13265.118: 97.9119% ( 1) 00:14:26.385 13265.118 - 13317.757: 97.9263% ( 2) 00:14:26.385 13317.757 - 13370.397: 97.9407% ( 2) 00:14:26.385 13370.397 - 13423.036: 97.9551% ( 2) 00:14:26.385 13423.036 - 13475.676: 97.9695% ( 2) 00:14:26.385 13475.676 - 13580.954: 97.9911% ( 3) 00:14:26.385 13580.954 - 13686.233: 98.0199% ( 4) 00:14:26.385 13686.233 - 13791.512: 98.0559% ( 5) 00:14:26.385 13791.512 - 13896.790: 98.0847% ( 4) 00:14:26.385 13896.790 - 14002.069: 98.1135% ( 4) 00:14:26.385 14002.069 - 14107.348: 98.1423% ( 4) 00:14:26.385 14107.348 - 14212.627: 98.1567% ( 2) 00:14:26.385 14739.020 - 14844.299: 98.1855% ( 4) 00:14:26.385 14844.299 - 14949.578: 98.2143% ( 4) 00:14:26.385 14949.578 - 15054.856: 98.2431% ( 4) 00:14:26.385 15054.856 - 15160.135: 98.2719% ( 4) 00:14:26.385 15160.135 - 15265.414: 98.3007% ( 4) 00:14:26.385 15265.414 - 15370.692: 98.3223% ( 3) 00:14:26.385 15370.692 - 15475.971: 98.3511% ( 4) 00:14:26.385 15475.971 - 15581.250: 98.3727% ( 3) 00:14:26.385 15581.250 - 15686.529: 98.4015% ( 4) 00:14:26.385 15686.529 - 15791.807: 98.4231% ( 3) 00:14:26.385 15791.807 - 15897.086: 98.4447% ( 3) 00:14:26.385 15897.086 - 16002.365: 98.4735% ( 4) 00:14:26.385 16002.365 - 16107.643: 98.4951% ( 3) 00:14:26.385 16107.643 - 16212.922: 98.5095% ( 2) 00:14:26.385 16212.922 - 16318.201: 98.5311% ( 
3) 00:14:26.385 16318.201 - 16423.480: 98.5527% ( 3) 00:14:26.385 16423.480 - 16528.758: 98.5743% ( 3) 00:14:26.385 16528.758 - 16634.037: 98.6031% ( 4) 00:14:26.385 16634.037 - 16739.316: 98.6175% ( 2) 00:14:26.385 16949.873 - 17055.152: 98.6463% ( 4) 00:14:26.385 17055.152 - 17160.431: 98.6679% ( 3) 00:14:26.385 17160.431 - 17265.709: 98.6895% ( 3) 00:14:26.385 17265.709 - 17370.988: 98.7183% ( 4) 00:14:26.385 17370.988 - 17476.267: 98.7399% ( 3) 00:14:26.385 17476.267 - 17581.545: 98.7615% ( 3) 00:14:26.385 17581.545 - 17686.824: 98.7903% ( 4) 00:14:26.385 17686.824 - 17792.103: 98.8119% ( 3) 00:14:26.385 17792.103 - 17897.382: 98.8335% ( 3) 00:14:26.385 17897.382 - 18002.660: 98.8623% ( 4) 00:14:26.385 18002.660 - 18107.939: 98.8839% ( 3) 00:14:26.385 18107.939 - 18213.218: 98.9127% ( 4) 00:14:26.385 18213.218 - 18318.496: 98.9343% ( 3) 00:14:26.385 18318.496 - 18423.775: 98.9559% ( 3) 00:14:26.385 18423.775 - 18529.054: 98.9847% ( 4) 00:14:26.385 18529.054 - 18634.333: 99.0063% ( 3) 00:14:26.385 18634.333 - 18739.611: 99.0351% ( 4) 00:14:26.385 18739.611 - 18844.890: 99.0567% ( 3) 00:14:26.385 18844.890 - 18950.169: 99.0711% ( 2) 00:14:26.385 18950.169 - 19055.447: 99.0783% ( 1) 00:14:26.385 30951.942 - 31162.500: 99.1215% ( 6) 00:14:26.385 31162.500 - 31373.057: 99.1719% ( 7) 00:14:26.385 31373.057 - 31583.614: 99.2224% ( 7) 00:14:26.385 31583.614 - 31794.172: 99.2800% ( 8) 00:14:26.385 31794.172 - 32004.729: 99.3376% ( 8) 00:14:26.385 32004.729 - 32215.287: 99.3880% ( 7) 00:14:26.385 32215.287 - 32425.844: 99.4456% ( 8) 00:14:26.385 32425.844 - 32636.402: 99.4816% ( 5) 00:14:26.385 32636.402 - 32846.959: 99.5320% ( 7) 00:14:26.385 32846.959 - 33057.516: 99.5392% ( 1) 00:14:26.385 37268.665 - 37479.222: 99.5680% ( 4) 00:14:26.385 37479.222 - 37689.780: 99.6256% ( 8) 00:14:26.385 37689.780 - 37900.337: 99.6688% ( 6) 00:14:26.385 37900.337 - 38110.895: 99.7264% ( 8) 00:14:26.385 38110.895 - 38321.452: 99.7840% ( 8) 00:14:26.385 38321.452 - 38532.010: 99.8344% ( 7) 00:14:26.385 38532.010 - 38742.567: 99.8920% ( 8) 00:14:26.385 38742.567 - 38953.124: 99.9424% ( 7) 00:14:26.385 38953.124 - 39163.682: 99.9928% ( 7) 00:14:26.385 39163.682 - 39374.239: 100.0000% ( 1) 00:14:26.385 00:14:26.385 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:14:26.385 ============================================================================== 00:14:26.385 Range in us Cumulative IO count 00:14:26.385 7632.707 - 7685.346: 0.0072% ( 1) 00:14:26.385 7685.346 - 7737.986: 0.0504% ( 6) 00:14:26.385 7737.986 - 7790.625: 0.1800% ( 18) 00:14:26.385 7790.625 - 7843.264: 0.3744% ( 27) 00:14:26.385 7843.264 - 7895.904: 0.7704% ( 55) 00:14:26.385 7895.904 - 7948.543: 1.2817% ( 71) 00:14:26.385 7948.543 - 8001.182: 2.0305% ( 104) 00:14:26.385 8001.182 - 8053.822: 3.1394% ( 154) 00:14:26.385 8053.822 - 8106.461: 4.5651% ( 198) 00:14:26.385 8106.461 - 8159.100: 6.5020% ( 269) 00:14:26.386 8159.100 - 8211.740: 8.7342% ( 310) 00:14:26.386 8211.740 - 8264.379: 11.2975% ( 356) 00:14:26.386 8264.379 - 8317.018: 14.3073% ( 418) 00:14:26.386 8317.018 - 8369.658: 17.5475% ( 450) 00:14:26.386 8369.658 - 8422.297: 21.0253% ( 483) 00:14:26.386 8422.297 - 8474.937: 25.0072% ( 553) 00:14:26.386 8474.937 - 8527.576: 28.9963% ( 554) 00:14:26.386 8527.576 - 8580.215: 33.1797% ( 581) 00:14:26.386 8580.215 - 8632.855: 37.5936% ( 613) 00:14:26.386 8632.855 - 8685.494: 42.0219% ( 615) 00:14:26.386 8685.494 - 8738.133: 46.5078% ( 623) 00:14:26.386 8738.133 - 8790.773: 50.9361% ( 615) 00:14:26.386 8790.773 - 8843.412: 55.1915% ( 
591) 00:14:26.386 8843.412 - 8896.051: 59.1878% ( 555) 00:14:26.386 8896.051 - 8948.691: 62.8744% ( 512) 00:14:26.386 8948.691 - 9001.330: 66.3522% ( 483) 00:14:26.386 9001.330 - 9053.969: 69.6357% ( 456) 00:14:26.386 9053.969 - 9106.609: 72.5518% ( 405) 00:14:26.386 9106.609 - 9159.248: 75.2016% ( 368) 00:14:26.386 9159.248 - 9211.888: 77.4914% ( 318) 00:14:26.386 9211.888 - 9264.527: 79.5939% ( 292) 00:14:26.386 9264.527 - 9317.166: 81.3796% ( 248) 00:14:26.386 9317.166 - 9369.806: 83.0933% ( 238) 00:14:26.386 9369.806 - 9422.445: 84.6702% ( 219) 00:14:26.386 9422.445 - 9475.084: 85.8727% ( 167) 00:14:26.386 9475.084 - 9527.724: 86.8808% ( 140) 00:14:26.386 9527.724 - 9580.363: 87.6584% ( 108) 00:14:26.386 9580.363 - 9633.002: 88.2921% ( 88) 00:14:26.386 9633.002 - 9685.642: 88.9113% ( 86) 00:14:26.386 9685.642 - 9738.281: 89.4657% ( 77) 00:14:26.386 9738.281 - 9790.920: 89.9626% ( 69) 00:14:26.386 9790.920 - 9843.560: 90.4162% ( 63) 00:14:26.386 9843.560 - 9896.199: 90.7330% ( 44) 00:14:26.386 9896.199 - 9948.839: 91.0138% ( 39) 00:14:26.386 9948.839 - 10001.478: 91.2730% ( 36) 00:14:26.386 10001.478 - 10054.117: 91.4819% ( 29) 00:14:26.386 10054.117 - 10106.757: 91.7123% ( 32) 00:14:26.386 10106.757 - 10159.396: 91.9427% ( 32) 00:14:26.386 10159.396 - 10212.035: 92.1155% ( 24) 00:14:26.386 10212.035 - 10264.675: 92.2739% ( 22) 00:14:26.386 10264.675 - 10317.314: 92.4899% ( 30) 00:14:26.386 10317.314 - 10369.953: 92.7131% ( 31) 00:14:26.386 10369.953 - 10422.593: 92.9291% ( 30) 00:14:26.386 10422.593 - 10475.232: 93.1740% ( 34) 00:14:26.386 10475.232 - 10527.871: 93.4116% ( 33) 00:14:26.386 10527.871 - 10580.511: 93.6708% ( 36) 00:14:26.386 10580.511 - 10633.150: 93.9372% ( 37) 00:14:26.386 10633.150 - 10685.790: 94.2252% ( 40) 00:14:26.386 10685.790 - 10738.429: 94.5132% ( 40) 00:14:26.386 10738.429 - 10791.068: 94.7941% ( 39) 00:14:26.386 10791.068 - 10843.708: 95.0605% ( 37) 00:14:26.386 10843.708 - 10896.347: 95.2981% ( 33) 00:14:26.386 10896.347 - 10948.986: 95.5501% ( 35) 00:14:26.386 10948.986 - 11001.626: 95.8021% ( 35) 00:14:26.386 11001.626 - 11054.265: 96.0541% ( 35) 00:14:26.386 11054.265 - 11106.904: 96.2774% ( 31) 00:14:26.386 11106.904 - 11159.544: 96.4934% ( 30) 00:14:26.386 11159.544 - 11212.183: 96.6950% ( 28) 00:14:26.386 11212.183 - 11264.822: 96.8606% ( 23) 00:14:26.386 11264.822 - 11317.462: 97.0190% ( 22) 00:14:26.386 11317.462 - 11370.101: 97.1414% ( 17) 00:14:26.386 11370.101 - 11422.741: 97.2062% ( 9) 00:14:26.386 11422.741 - 11475.380: 97.2566% ( 7) 00:14:26.386 11475.380 - 11528.019: 97.2782% ( 3) 00:14:26.386 11528.019 - 11580.659: 97.3142% ( 5) 00:14:26.386 11580.659 - 11633.298: 97.3430% ( 4) 00:14:26.386 11633.298 - 11685.937: 97.3718% ( 4) 00:14:26.386 11685.937 - 11738.577: 97.4150% ( 6) 00:14:26.386 11738.577 - 11791.216: 97.4438% ( 4) 00:14:26.386 11791.216 - 11843.855: 97.4726% ( 4) 00:14:26.386 11843.855 - 11896.495: 97.5086% ( 5) 00:14:26.386 11896.495 - 11949.134: 97.5374% ( 4) 00:14:26.386 11949.134 - 12001.773: 97.5662% ( 4) 00:14:26.386 12001.773 - 12054.413: 97.5950% ( 4) 00:14:26.386 12054.413 - 12107.052: 97.6310% ( 5) 00:14:26.386 12107.052 - 12159.692: 97.6382% ( 1) 00:14:26.386 12159.692 - 12212.331: 97.6526% ( 2) 00:14:26.386 12212.331 - 12264.970: 97.6671% ( 2) 00:14:26.386 12264.970 - 12317.610: 97.6815% ( 2) 00:14:26.386 12317.610 - 12370.249: 97.6959% ( 2) 00:14:26.386 13107.200 - 13159.839: 97.7031% ( 1) 00:14:26.386 13159.839 - 13212.479: 97.7175% ( 2) 00:14:26.386 13212.479 - 13265.118: 97.7391% ( 3) 00:14:26.386 13265.118 - 
13317.757: 97.7535% ( 2) 00:14:26.386 13317.757 - 13370.397: 97.7679% ( 2) 00:14:26.386 13370.397 - 13423.036: 97.7823% ( 2) 00:14:26.386 13423.036 - 13475.676: 97.7967% ( 2) 00:14:26.386 13475.676 - 13580.954: 97.8255% ( 4) 00:14:26.386 13580.954 - 13686.233: 97.8687% ( 6) 00:14:26.386 13686.233 - 13791.512: 97.8831% ( 2) 00:14:26.386 13791.512 - 13896.790: 97.9119% ( 4) 00:14:26.386 13896.790 - 14002.069: 97.9407% ( 4) 00:14:26.386 14002.069 - 14107.348: 97.9695% ( 4) 00:14:26.386 14107.348 - 14212.627: 98.0199% ( 7) 00:14:26.386 14212.627 - 14317.905: 98.0775% ( 8) 00:14:26.386 14317.905 - 14423.184: 98.1279% ( 7) 00:14:26.386 14423.184 - 14528.463: 98.1927% ( 9) 00:14:26.386 14528.463 - 14633.741: 98.2359% ( 6) 00:14:26.386 14633.741 - 14739.020: 98.2935% ( 8) 00:14:26.386 14739.020 - 14844.299: 98.3367% ( 6) 00:14:26.386 14844.299 - 14949.578: 98.3655% ( 4) 00:14:26.386 14949.578 - 15054.856: 98.3943% ( 4) 00:14:26.386 15054.856 - 15160.135: 98.4159% ( 3) 00:14:26.386 15160.135 - 15265.414: 98.4375% ( 3) 00:14:26.386 15265.414 - 15370.692: 98.4663% ( 4) 00:14:26.386 15370.692 - 15475.971: 98.4879% ( 3) 00:14:26.386 15475.971 - 15581.250: 98.5167% ( 4) 00:14:26.386 15581.250 - 15686.529: 98.5455% ( 4) 00:14:26.386 15686.529 - 15791.807: 98.5743% ( 4) 00:14:26.386 15791.807 - 15897.086: 98.6031% ( 4) 00:14:26.386 15897.086 - 16002.365: 98.6175% ( 2) 00:14:26.386 16423.480 - 16528.758: 98.6247% ( 1) 00:14:26.386 16528.758 - 16634.037: 98.6535% ( 4) 00:14:26.386 16634.037 - 16739.316: 98.6751% ( 3) 00:14:26.386 16739.316 - 16844.594: 98.6967% ( 3) 00:14:26.386 16844.594 - 16949.873: 98.7255% ( 4) 00:14:26.386 16949.873 - 17055.152: 98.7543% ( 4) 00:14:26.386 17055.152 - 17160.431: 98.7759% ( 3) 00:14:26.386 17160.431 - 17265.709: 98.7975% ( 3) 00:14:26.386 17265.709 - 17370.988: 98.8263% ( 4) 00:14:26.386 17370.988 - 17476.267: 98.8407% ( 2) 00:14:26.386 17476.267 - 17581.545: 98.8623% ( 3) 00:14:26.386 17581.545 - 17686.824: 98.8839% ( 3) 00:14:26.386 17686.824 - 17792.103: 98.9127% ( 4) 00:14:26.386 17792.103 - 17897.382: 98.9343% ( 3) 00:14:26.386 17897.382 - 18002.660: 98.9559% ( 3) 00:14:26.386 18002.660 - 18107.939: 98.9847% ( 4) 00:14:26.386 18107.939 - 18213.218: 99.0063% ( 3) 00:14:26.386 18213.218 - 18318.496: 99.0279% ( 3) 00:14:26.386 18318.496 - 18423.775: 99.0495% ( 3) 00:14:26.386 18423.775 - 18529.054: 99.0711% ( 3) 00:14:26.386 18529.054 - 18634.333: 99.0783% ( 1) 00:14:26.386 28846.368 - 29056.925: 99.0999% ( 3) 00:14:26.386 29056.925 - 29267.483: 99.1503% ( 7) 00:14:26.386 29267.483 - 29478.040: 99.2079% ( 8) 00:14:26.386 29478.040 - 29688.598: 99.2656% ( 8) 00:14:26.386 29688.598 - 29899.155: 99.3232% ( 8) 00:14:26.386 29899.155 - 30109.712: 99.3736% ( 7) 00:14:26.386 30109.712 - 30320.270: 99.4240% ( 7) 00:14:26.387 30320.270 - 30530.827: 99.4672% ( 6) 00:14:26.387 30530.827 - 30741.385: 99.5248% ( 8) 00:14:26.387 30741.385 - 30951.942: 99.5392% ( 2) 00:14:26.387 35163.091 - 35373.648: 99.5752% ( 5) 00:14:26.387 35373.648 - 35584.206: 99.6400% ( 9) 00:14:26.387 35584.206 - 35794.763: 99.6904% ( 7) 00:14:26.387 35794.763 - 36005.320: 99.7480% ( 8) 00:14:26.387 36005.320 - 36215.878: 99.7984% ( 7) 00:14:26.387 36215.878 - 36426.435: 99.8560% ( 8) 00:14:26.387 36426.435 - 36636.993: 99.9064% ( 7) 00:14:26.387 36636.993 - 36847.550: 99.9568% ( 7) 00:14:26.387 36847.550 - 37058.108: 100.0000% ( 6) 00:14:26.387 00:14:26.387 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:14:26.387 
00:14:26.387 ==============================================================================
00:14:26.387        Range in us     Cumulative IO count
00:14:26.388 [latency histogram buckets 7685.346us through 34952.533us omitted; cumulative IO count reaches 100.0000%]
00:14:26.388 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:14:26.388 ==============================================================================
00:14:26.388        Range in us     Cumulative IO count
00:14:26.389 [latency histogram buckets 7632.707us through 28214.696us omitted; cumulative IO count reaches 100.0000%]
00:14:26.389 13:33:38 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:14:27.775 Initializing NVMe Controllers
00:14:27.775 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:27.775 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:14:27.775 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:14:27.775 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:14:27.775 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:14:27.775 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:14:27.775 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:14:27.775 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:14:27.775 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:14:27.775 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:14:27.775 Initialization complete. Launching workers.
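(A hedged sketch, not a line from this log: the write workload above can be re-run by hand, assuming an SPDK tree built under ~/spdk and NVMe devices already rebound by scripts/setup.sh. Per the spdk_nvme_perf usage text, -q is the queue depth per namespace, -w the workload type, -o the I/O size in bytes, -t the run time in seconds, -L enables latency tracking (given twice, -LL, it also prints the per-bucket histograms seen below), and -i is the shared-memory group ID.)

  # Hypothetical manual reproduction of the CI invocation above.
  sudo ~/spdk/scripts/setup.sh
  sudo ~/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0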
00:14:27.775 ========================================================
00:14:27.775                                                                             Latency(us)
00:14:27.775 Device Information                     :       IOPS      MiB/s    Average        min        max
00:14:27.775 PCIE (0000:00:10.0) NSID 1 from core  0:   10064.40     117.94   12752.84    8412.04   43163.93
00:14:27.775 PCIE (0000:00:11.0) NSID 1 from core  0:   10064.40     117.94   12733.53    8642.02   41343.44
00:14:27.775 PCIE (0000:00:13.0) NSID 1 from core  0:   10064.40     117.94   12713.64    8498.30   40286.84
00:14:27.775 PCIE (0000:00:12.0) NSID 1 from core  0:   10064.40     117.94   12694.56    8538.39   38499.36
00:14:27.775 PCIE (0000:00:12.0) NSID 2 from core  0:   10064.40     117.94   12675.23    8615.16   36673.52
00:14:27.775 PCIE (0000:00:12.0) NSID 3 from core  0:   10064.40     117.94   12655.99    8591.01   34975.58
00:14:27.775 ========================================================
00:14:27.775 Total                                  :   60386.37     707.65   12704.30    8412.04   43163.93
00:14:27.775
00:14:27.775 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:14:27.775 =================================================================================
00:14:27.775   1.00000% :  8896.051us
00:14:27.775  10.00000% :  9580.363us
00:14:27.775  25.00000% : 10264.675us
00:14:27.775  50.00000% : 11949.134us
00:14:27.775  75.00000% : 14633.741us
00:14:27.775  90.00000% : 16423.480us
00:14:27.775  95.00000% : 17476.267us
00:14:27.775  98.00000% : 18739.611us
00:14:27.775  99.00000% : 30109.712us
00:14:27.775  99.50000% : 41269.256us
00:14:27.775  99.90000% : 42953.716us
00:14:27.775  99.99000% : 43164.273us
00:14:27.775  99.99900% : 43164.273us
00:14:27.775  99.99990% : 43164.273us
00:14:27.775  99.99999% : 43164.273us
00:14:27.775
00:14:27.775 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:14:27.775 =================================================================================
00:14:27.775   1.00000% :  8896.051us
00:14:27.775  10.00000% :  9580.363us
00:14:27.775  25.00000% : 10159.396us
00:14:27.775  50.00000% : 11896.495us
00:14:27.775  75.00000% : 14633.741us
00:14:27.775  90.00000% : 16423.480us
00:14:27.775  95.00000% : 17476.267us
00:14:27.775  98.00000% : 18739.611us
00:14:27.775  99.00000% : 30530.827us
00:14:27.775  99.50000% : 39584.797us
00:14:27.775  99.90000% : 41058.699us
00:14:27.775  99.99000% : 41479.814us
00:14:27.775  99.99900% : 41479.814us
00:14:27.776  99.99990% : 41479.814us
00:14:27.776  99.99999% : 41479.814us
00:14:27.776
00:14:27.776 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:14:27.776 =================================================================================
00:14:27.776   1.00000% :  8948.691us
00:14:27.776  10.00000% :  9580.363us
00:14:27.776  25.00000% : 10212.035us
00:14:27.776  50.00000% : 12001.773us
00:14:27.776  75.00000% : 14528.463us
00:14:27.776  90.00000% : 16318.201us
00:14:27.776  95.00000% : 17160.431us
00:14:27.776  98.00000% : 18950.169us
00:14:27.776  99.00000% : 29899.155us
00:14:27.776  99.50000% : 38532.010us
00:14:27.776  99.90000% : 40005.912us
00:14:27.776  99.99000% : 40427.027us
00:14:27.776  99.99900% : 40427.027us
00:14:27.776  99.99990% : 40427.027us
00:14:27.776  99.99999% : 40427.027us
00:14:27.776
00:14:27.776 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:14:27.776 =================================================================================
00:14:27.776   1.00000% :  9001.330us
00:14:27.776  10.00000% :  9527.724us
00:14:27.776  25.00000% : 10212.035us
00:14:27.776  50.00000% : 12001.773us
00:14:27.776  75.00000% : 14633.741us
00:14:27.776  90.00000% : 16318.201us
00:14:27.776  95.00000% : 17160.431us
00:14:27.776  98.00000% : 19055.447us
00:14:27.776  99.00000% : 28635.810us
00:14:27.776  99.50000% : 36847.550us
00:14:27.776  99.90000% : 38321.452us
00:14:27.776  99.99000% : 38532.010us
00:14:27.776  99.99900% : 38532.010us
00:14:27.776  99.99990% : 38532.010us
00:14:27.776  99.99999% : 38532.010us
00:14:27.776
00:14:27.776 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:14:27.776 =================================================================================
00:14:27.776   1.00000% :  9001.330us
00:14:27.776  10.00000% :  9580.363us
00:14:27.776  25.00000% : 10264.675us
00:14:27.776  50.00000% : 11949.134us
00:14:27.776  75.00000% : 14528.463us
00:14:27.776  90.00000% : 16212.922us
00:14:27.776  95.00000% : 17370.988us
00:14:27.776  98.00000% : 18634.333us
00:14:27.776  99.00000% : 26951.351us
00:14:27.776  99.50000% : 34952.533us
00:14:27.776  99.90000% : 36426.435us
00:14:27.776  99.99000% : 36847.550us
00:14:27.776  99.99900% : 36847.550us
00:14:27.776  99.99990% : 36847.550us
00:14:27.776  99.99999% : 36847.550us
00:14:27.776
00:14:27.776 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:14:27.776 =================================================================================
00:14:27.776   1.00000% :  9001.330us
00:14:27.776  10.00000% :  9580.363us
00:14:27.776  25.00000% : 10264.675us
00:14:27.776  50.00000% : 12001.773us
00:14:27.776  75.00000% : 14528.463us
00:14:27.776  90.00000% : 16212.922us
00:14:27.776  95.00000% : 17370.988us
00:14:27.776  98.00000% : 18529.054us
00:14:27.776  99.00000% : 25477.449us
00:14:27.776  99.50000% : 33268.074us
00:14:27.776  99.90000% : 34741.976us
00:14:27.776  99.99000% : 34952.533us
00:14:27.776  99.99900% : 35163.091us
00:14:27.776  99.99990% : 35163.091us
00:14:27.776  99.99999% : 35163.091us
00:14:27.776
00:14:27.776 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:14:27.776 ==============================================================================
00:14:27.776        Range in us     Cumulative IO count
00:14:27.777 [latency histogram buckets 8369.658us through 43164.273us omitted; cumulative IO count reaches 100.0000%]
00:14:27.777 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:14:27.777 ==============================================================================
00:14:27.777        Range in us     Cumulative IO count
00:14:27.778 [latency histogram buckets 8632.855us through 41479.814us omitted; cumulative IO count reaches 100.0000%]
00:14:27.778 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:14:27.778 ==============================================================================
00:14:27.778        Range in us     Cumulative IO count
00:14:27.779 [latency histogram buckets 8474.937us through 40427.027us omitted; cumulative IO count reaches 100.0000%]
00:14:27.779 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:14:27.779 ==============================================================================
00:14:27.779        Range in us     Cumulative IO count
00:14:27.780 [latency histogram buckets 8527.576us through 38532.010us omitted; cumulative IO count reaches 100.0000%]
00:14:27.780 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:14:27.780 ==============================================================================
00:14:27.781        Range in us     Cumulative IO count
00:14:27.781 [latency histogram buckets 8580.215us through 36847.550us omitted; cumulative IO count reaches 100.0000%]
00:14:27.781 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:14:27.782 ==============================================================================
00:14:27.782        Range in us     Cumulative IO count
00:14:27.782 [latency histogram buckets 8580.215us through 10159.396us; log truncated mid-histogram]
23.9320% ( 53) 00:14:27.782 10159.396 - 10212.035: 24.5550% ( 63) 00:14:27.782 10212.035 - 10264.675: 25.3066% ( 76) 00:14:27.782 10264.675 - 10317.314: 25.7911% ( 49) 00:14:27.782 10317.314 - 10369.953: 26.3350% ( 55) 00:14:27.782 10369.953 - 10422.593: 26.9877% ( 66) 00:14:27.782 10422.593 - 10475.232: 27.6701% ( 69) 00:14:27.782 10475.232 - 10527.871: 28.1250% ( 46) 00:14:27.782 10527.871 - 10580.511: 28.5305% ( 41) 00:14:27.782 10580.511 - 10633.150: 29.0645% ( 54) 00:14:27.782 10633.150 - 10685.790: 29.8655% ( 81) 00:14:27.782 10685.790 - 10738.429: 30.8544% ( 100) 00:14:27.782 10738.429 - 10791.068: 31.5368% ( 69) 00:14:27.782 10791.068 - 10843.708: 32.1499% ( 62) 00:14:27.782 10843.708 - 10896.347: 32.7037% ( 56) 00:14:27.782 10896.347 - 10948.986: 33.4949% ( 80) 00:14:27.782 10948.986 - 11001.626: 34.1278% ( 64) 00:14:27.782 11001.626 - 11054.265: 34.8794% ( 76) 00:14:27.782 11054.265 - 11106.904: 35.4925% ( 62) 00:14:27.782 11106.904 - 11159.544: 36.1551% ( 67) 00:14:27.782 11159.544 - 11212.183: 36.8869% ( 74) 00:14:27.782 11212.183 - 11264.822: 37.5000% ( 62) 00:14:27.782 11264.822 - 11317.462: 38.2021% ( 71) 00:14:27.782 11317.462 - 11370.101: 38.8548% ( 66) 00:14:27.782 11370.101 - 11422.741: 39.6954% ( 85) 00:14:27.782 11422.741 - 11475.380: 40.4272% ( 74) 00:14:27.782 11475.380 - 11528.019: 41.2184% ( 80) 00:14:27.782 11528.019 - 11580.659: 42.0293% ( 82) 00:14:27.782 11580.659 - 11633.298: 43.0182% ( 100) 00:14:27.782 11633.298 - 11685.937: 44.1159% ( 111) 00:14:27.782 11685.937 - 11738.577: 45.2037% ( 110) 00:14:27.782 11738.577 - 11791.216: 46.3410% ( 115) 00:14:27.782 11791.216 - 11843.855: 47.5672% ( 124) 00:14:27.782 11843.855 - 11896.495: 48.7638% ( 121) 00:14:27.782 11896.495 - 11949.134: 49.9308% ( 118) 00:14:27.782 11949.134 - 12001.773: 50.9593% ( 104) 00:14:27.782 12001.773 - 12054.413: 52.0075% ( 106) 00:14:27.782 12054.413 - 12107.052: 52.8975% ( 90) 00:14:27.782 12107.052 - 12159.692: 53.9260% ( 104) 00:14:27.782 12159.692 - 12212.331: 54.7765% ( 86) 00:14:27.782 12212.331 - 12264.970: 55.3402% ( 57) 00:14:27.782 12264.970 - 12317.610: 55.9039% ( 57) 00:14:27.782 12317.610 - 12370.249: 56.3687% ( 47) 00:14:27.782 12370.249 - 12422.888: 56.7741% ( 41) 00:14:27.782 12422.888 - 12475.528: 57.1400% ( 37) 00:14:27.782 12475.528 - 12528.167: 57.6246% ( 49) 00:14:27.782 12528.167 - 12580.806: 58.0795% ( 46) 00:14:27.782 12580.806 - 12633.446: 58.5047% ( 43) 00:14:27.782 12633.446 - 12686.085: 58.9695% ( 47) 00:14:27.782 12686.085 - 12738.724: 59.4640% ( 50) 00:14:27.782 12738.724 - 12791.364: 59.8596% ( 40) 00:14:27.782 12791.364 - 12844.003: 60.4035% ( 55) 00:14:27.782 12844.003 - 12896.643: 61.1254% ( 73) 00:14:27.782 12896.643 - 12949.282: 61.7484% ( 63) 00:14:27.782 12949.282 - 13001.921: 62.4901% ( 75) 00:14:27.782 13001.921 - 13054.561: 63.2021% ( 72) 00:14:27.782 13054.561 - 13107.200: 63.6768% ( 48) 00:14:27.782 13107.200 - 13159.839: 64.1515% ( 48) 00:14:27.782 13159.839 - 13212.479: 64.6361% ( 49) 00:14:27.782 13212.479 - 13265.118: 65.0119% ( 38) 00:14:27.782 13265.118 - 13317.757: 65.3382% ( 33) 00:14:27.782 13317.757 - 13370.397: 65.6250% ( 29) 00:14:27.782 13370.397 - 13423.036: 65.8525% ( 23) 00:14:27.782 13423.036 - 13475.676: 66.0206% ( 17) 00:14:27.782 13475.676 - 13580.954: 66.4854% ( 47) 00:14:27.782 13580.954 - 13686.233: 67.1875% ( 71) 00:14:27.782 13686.233 - 13791.512: 68.2456% ( 107) 00:14:27.782 13791.512 - 13896.790: 69.2247% ( 99) 00:14:27.782 13896.790 - 14002.069: 70.1543% ( 94) 00:14:27.782 14002.069 - 14107.348: 71.3311% ( 119) 
00:14:27.782 14107.348 - 14212.627: 72.3794% ( 106) 00:14:27.782 14212.627 - 14317.905: 73.2496% ( 88) 00:14:27.782 14317.905 - 14423.184: 74.3473% ( 111) 00:14:27.782 14423.184 - 14528.463: 75.2670% ( 93) 00:14:27.782 14528.463 - 14633.741: 75.8900% ( 63) 00:14:27.782 14633.741 - 14739.020: 76.7009% ( 82) 00:14:27.782 14739.020 - 14844.299: 77.6998% ( 101) 00:14:27.782 14844.299 - 14949.578: 78.7678% ( 108) 00:14:27.782 14949.578 - 15054.856: 80.2116% ( 146) 00:14:27.782 15054.856 - 15160.135: 81.1907% ( 99) 00:14:27.782 15160.135 - 15265.414: 82.4268% ( 125) 00:14:27.782 15265.414 - 15370.692: 83.7816% ( 137) 00:14:27.782 15370.692 - 15475.971: 84.8794% ( 111) 00:14:27.782 15475.971 - 15581.250: 85.6804% ( 81) 00:14:27.782 15581.250 - 15686.529: 86.4616% ( 79) 00:14:27.782 15686.529 - 15791.807: 87.3022% ( 85) 00:14:27.782 15791.807 - 15897.086: 88.1725% ( 88) 00:14:27.782 15897.086 - 16002.365: 89.1713% ( 101) 00:14:27.782 16002.365 - 16107.643: 89.7449% ( 58) 00:14:27.782 16107.643 - 16212.922: 90.2888% ( 55) 00:14:27.782 16212.922 - 16318.201: 90.8030% ( 52) 00:14:27.782 16318.201 - 16423.480: 91.2480% ( 45) 00:14:27.782 16423.480 - 16528.758: 91.7029% ( 46) 00:14:27.782 16528.758 - 16634.037: 92.3754% ( 68) 00:14:27.782 16634.037 - 16739.316: 92.8699% ( 50) 00:14:27.782 16739.316 - 16844.594: 93.4237% ( 56) 00:14:27.782 16844.594 - 16949.873: 93.8192% ( 40) 00:14:27.782 16949.873 - 17055.152: 94.2544% ( 44) 00:14:27.782 17055.152 - 17160.431: 94.5906% ( 34) 00:14:27.782 17160.431 - 17265.709: 94.9268% ( 34) 00:14:27.782 17265.709 - 17370.988: 95.2828% ( 36) 00:14:27.782 17370.988 - 17476.267: 95.5103% ( 23) 00:14:27.782 17476.267 - 17581.545: 95.7872% ( 28) 00:14:27.782 17581.545 - 17686.824: 96.1630% ( 38) 00:14:27.782 17686.824 - 17792.103: 96.5091% ( 35) 00:14:27.782 17792.103 - 17897.382: 96.6377% ( 13) 00:14:27.782 17897.382 - 18002.660: 96.7761% ( 14) 00:14:27.782 18002.660 - 18107.939: 96.9640% ( 19) 00:14:27.782 18107.939 - 18213.218: 97.1618% ( 20) 00:14:27.782 18213.218 - 18318.496: 97.5079% ( 35) 00:14:27.782 18318.496 - 18423.775: 97.8738% ( 37) 00:14:27.782 18423.775 - 18529.054: 98.0914% ( 22) 00:14:27.782 18529.054 - 18634.333: 98.3089% ( 22) 00:14:27.782 18634.333 - 18739.611: 98.4771% ( 17) 00:14:27.782 18739.611 - 18844.890: 98.5957% ( 12) 00:14:27.782 18844.890 - 18950.169: 98.6452% ( 5) 00:14:27.782 18950.169 - 19055.447: 98.6748% ( 3) 00:14:27.782 19055.447 - 19160.726: 98.7045% ( 3) 00:14:27.782 19160.726 - 19266.005: 98.7342% ( 3) 00:14:27.782 24845.777 - 24951.055: 98.8331% ( 10) 00:14:27.782 24951.055 - 25056.334: 98.9320% ( 10) 00:14:27.782 25056.334 - 25161.613: 98.9517% ( 2) 00:14:27.782 25161.613 - 25266.892: 98.9715% ( 2) 00:14:27.782 25266.892 - 25372.170: 98.9913% ( 2) 00:14:27.782 25372.170 - 25477.449: 99.0111% ( 2) 00:14:27.782 25477.449 - 25582.728: 99.0407% ( 3) 00:14:27.782 25582.728 - 25688.006: 99.0605% ( 2) 00:14:27.782 25688.006 - 25793.285: 99.0803% ( 2) 00:14:27.782 25793.285 - 25898.564: 99.1100% ( 3) 00:14:27.782 25898.564 - 26003.843: 99.1396% ( 3) 00:14:27.782 26003.843 - 26109.121: 99.1693% ( 3) 00:14:27.782 26109.121 - 26214.400: 99.1990% ( 3) 00:14:27.782 26214.400 - 26319.679: 99.2188% ( 2) 00:14:27.782 26319.679 - 26424.957: 99.2583% ( 4) 00:14:27.782 26424.957 - 26530.236: 99.2880% ( 3) 00:14:27.782 26530.236 - 26635.515: 99.3176% ( 3) 00:14:27.782 26635.515 - 26740.794: 99.3374% ( 2) 00:14:27.782 26740.794 - 26846.072: 99.3671% ( 3) 00:14:27.782 32636.402 - 32846.959: 99.3770% ( 1) 00:14:27.782 32846.959 - 33057.516: 99.4363% ( 
6) 00:14:27.782 33057.516 - 33268.074: 99.5055% ( 7) 00:14:27.782 33268.074 - 33478.631: 99.5649% ( 6) 00:14:27.782 33478.631 - 33689.189: 99.6341% ( 7) 00:14:27.782 33689.189 - 33899.746: 99.6934% ( 6) 00:14:27.782 33899.746 - 34110.304: 99.7528% ( 6) 00:14:27.782 34110.304 - 34320.861: 99.8121% ( 6) 00:14:27.782 34320.861 - 34531.418: 99.8714% ( 6) 00:14:27.782 34531.418 - 34741.976: 99.9308% ( 6) 00:14:27.782 34741.976 - 34952.533: 99.9901% ( 6) 00:14:27.782 34952.533 - 35163.091: 100.0000% ( 1) 00:14:27.782 00:14:27.782 13:33:39 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:14:27.782 00:14:27.782 real 0m2.713s 00:14:27.782 user 0m2.306s 00:14:27.782 sys 0m0.307s 00:14:27.782 13:33:39 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.782 13:33:39 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:14:27.782 ************************************ 00:14:27.782 END TEST nvme_perf 00:14:27.782 ************************************ 00:14:27.782 13:33:39 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:14:27.782 13:33:39 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:27.782 13:33:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.782 13:33:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.782 ************************************ 00:14:27.782 START TEST nvme_hello_world 00:14:27.782 ************************************ 00:14:27.782 13:33:39 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:14:28.041 Initializing NVMe Controllers 00:14:28.041 Attached to 0000:00:10.0 00:14:28.041 Namespace ID: 1 size: 6GB 00:14:28.041 Attached to 0000:00:11.0 00:14:28.041 Namespace ID: 1 size: 5GB 00:14:28.041 Attached to 0000:00:13.0 00:14:28.041 Namespace ID: 1 size: 1GB 00:14:28.041 Attached to 0000:00:12.0 00:14:28.041 Namespace ID: 1 size: 4GB 00:14:28.041 Namespace ID: 2 size: 4GB 00:14:28.041 Namespace ID: 3 size: 4GB 00:14:28.041 Initialization complete. 00:14:28.041 INFO: using host memory buffer for IO 00:14:28.041 Hello world! 00:14:28.041 INFO: using host memory buffer for IO 00:14:28.041 Hello world! 00:14:28.041 INFO: using host memory buffer for IO 00:14:28.041 Hello world! 00:14:28.041 INFO: using host memory buffer for IO 00:14:28.041 Hello world! 00:14:28.041 INFO: using host memory buffer for IO 00:14:28.041 Hello world! 00:14:28.041 INFO: using host memory buffer for IO 00:14:28.041 Hello world! 
00:14:28.042 00:14:28.042 real 0m0.337s 00:14:28.042 user 0m0.128s 00:14:28.042 sys 0m0.159s 00:14:28.042 13:33:39 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.042 13:33:39 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:28.042 ************************************ 00:14:28.042 END TEST nvme_hello_world 00:14:28.042 ************************************ 00:14:28.042 13:33:39 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:14:28.042 13:33:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:28.042 13:33:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.042 13:33:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.042 ************************************ 00:14:28.042 START TEST nvme_sgl 00:14:28.042 ************************************ 00:14:28.042 13:33:39 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:14:28.301 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:14:28.301 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:14:28.301 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:14:28.571 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:14:28.571 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:14:28.571 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:14:28.571 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:14:28.571 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:14:28.571 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:14:28.571 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:14:28.571 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:14:28.571 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:14:28.571 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:14:28.572 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:14:28.572 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:14:28.572 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:14:28.572 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:14:28.572 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:14:28.572 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:14:28.572 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:14:28.572 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:14:28.572 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:14:28.572 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:14:28.572 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:14:28.572 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:14:28.572 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:14:28.572 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:14:28.572 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:14:28.572 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:14:28.572 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:14:28.572 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:14:28.572 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:14:28.572 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:14:28.572 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:14:28.572 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:14:28.572 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:14:28.572 NVMe Readv/Writev Request test 00:14:28.572 Attached to 0000:00:10.0 00:14:28.572 Attached to 0000:00:11.0 00:14:28.572 Attached to 0000:00:13.0 00:14:28.572 Attached to 0000:00:12.0 00:14:28.572 0000:00:10.0: build_io_request_2 test passed 00:14:28.572 0000:00:10.0: build_io_request_4 test passed 00:14:28.572 0000:00:10.0: build_io_request_5 test passed 00:14:28.572 0000:00:10.0: build_io_request_6 test passed 00:14:28.572 0000:00:10.0: build_io_request_7 test passed 00:14:28.572 0000:00:10.0: build_io_request_10 test passed 00:14:28.572 0000:00:11.0: build_io_request_2 test passed 00:14:28.572 0000:00:11.0: build_io_request_4 test passed 00:14:28.572 0000:00:11.0: build_io_request_5 test passed 00:14:28.572 0000:00:11.0: build_io_request_6 test passed 00:14:28.572 0000:00:11.0: build_io_request_7 test passed 00:14:28.572 0000:00:11.0: build_io_request_10 test passed 00:14:28.572 Cleaning up... 00:14:28.572 00:14:28.572 real 0m0.360s 00:14:28.572 user 0m0.184s 00:14:28.572 sys 0m0.129s 00:14:28.572 13:33:40 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.572 13:33:40 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:14:28.572 ************************************ 00:14:28.572 END TEST nvme_sgl 00:14:28.572 ************************************ 00:14:28.572 13:33:40 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:14:28.572 13:33:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:28.572 13:33:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.572 13:33:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.572 ************************************ 00:14:28.572 START TEST nvme_e2edp 00:14:28.572 ************************************ 00:14:28.572 13:33:40 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:14:28.864 NVMe Write/Read with End-to-End data protection test 00:14:28.864 Attached to 0000:00:10.0 00:14:28.864 Attached to 0000:00:11.0 00:14:28.864 Attached to 0000:00:13.0 00:14:28.864 Attached to 0000:00:12.0 00:14:28.864 Cleaning up... 
00:14:28.864 00:14:28.864 real 0m0.319s 00:14:28.864 user 0m0.125s 00:14:28.864 sys 0m0.145s 00:14:28.864 13:33:40 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.864 13:33:40 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:14:28.864 ************************************ 00:14:28.864 END TEST nvme_e2edp 00:14:28.864 ************************************ 00:14:28.864 13:33:40 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:14:28.864 13:33:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:28.864 13:33:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.864 13:33:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.864 ************************************ 00:14:28.864 START TEST nvme_reserve 00:14:28.864 ************************************ 00:14:28.864 13:33:40 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:14:29.123 ===================================================== 00:14:29.123 NVMe Controller at PCI bus 0, device 16, function 0 00:14:29.123 ===================================================== 00:14:29.123 Reservations: Not Supported 00:14:29.123 ===================================================== 00:14:29.123 NVMe Controller at PCI bus 0, device 17, function 0 00:14:29.123 ===================================================== 00:14:29.123 Reservations: Not Supported 00:14:29.123 ===================================================== 00:14:29.123 NVMe Controller at PCI bus 0, device 19, function 0 00:14:29.123 ===================================================== 00:14:29.123 Reservations: Not Supported 00:14:29.123 ===================================================== 00:14:29.123 NVMe Controller at PCI bus 0, device 18, function 0 00:14:29.123 ===================================================== 00:14:29.123 Reservations: Not Supported 00:14:29.123 Reservation test passed 00:14:29.382 00:14:29.382 real 0m0.293s 00:14:29.382 user 0m0.098s 00:14:29.382 sys 0m0.147s 00:14:29.382 13:33:41 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.382 13:33:41 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:14:29.382 ************************************ 00:14:29.382 END TEST nvme_reserve 00:14:29.382 ************************************ 00:14:29.382 13:33:41 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:14:29.382 13:33:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:29.382 13:33:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.382 13:33:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:29.382 ************************************ 00:14:29.382 START TEST nvme_err_injection 00:14:29.382 ************************************ 00:14:29.382 13:33:41 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:14:29.641 NVMe Error Injection test 00:14:29.641 Attached to 0000:00:10.0 00:14:29.641 Attached to 0000:00:11.0 00:14:29.641 Attached to 0000:00:13.0 00:14:29.641 Attached to 0000:00:12.0 00:14:29.641 0000:00:10.0: get features failed as expected 00:14:29.641 0000:00:11.0: get features failed as expected 00:14:29.641 0000:00:13.0: get features failed as expected 00:14:29.641 0000:00:12.0: get features failed as expected 00:14:29.641 
0000:00:13.0: get features successfully as expected 00:14:29.641 0000:00:12.0: get features successfully as expected 00:14:29.641 0000:00:10.0: get features successfully as expected 00:14:29.641 0000:00:11.0: get features successfully as expected 00:14:29.641 0000:00:10.0: read failed as expected 00:14:29.641 0000:00:11.0: read failed as expected 00:14:29.641 0000:00:13.0: read failed as expected 00:14:29.641 0000:00:12.0: read failed as expected 00:14:29.641 0000:00:10.0: read successfully as expected 00:14:29.641 0000:00:11.0: read successfully as expected 00:14:29.641 0000:00:13.0: read successfully as expected 00:14:29.641 0000:00:12.0: read successfully as expected 00:14:29.641 Cleaning up... 00:14:29.641 ************************************ 00:14:29.641 END TEST nvme_err_injection 00:14:29.641 ************************************ 00:14:29.641 00:14:29.641 real 0m0.320s 00:14:29.641 user 0m0.127s 00:14:29.641 sys 0m0.147s 00:14:29.641 13:33:41 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.641 13:33:41 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:14:29.641 13:33:41 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:14:29.641 13:33:41 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:14:29.641 13:33:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.641 13:33:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:29.641 ************************************ 00:14:29.641 START TEST nvme_overhead 00:14:29.641 ************************************ 00:14:29.641 13:33:41 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:14:31.017 Initializing NVMe Controllers 00:14:31.017 Attached to 0000:00:10.0 00:14:31.017 Attached to 0000:00:11.0 00:14:31.017 Attached to 0000:00:13.0 00:14:31.017 Attached to 0000:00:12.0 00:14:31.017 Initialization complete. Launching workers. 
00:14:31.017 submit (in ns)   avg, min, max = 14891.5, 10617.7, 102804.8
00:14:31.017 complete (in ns) avg, min, max =  9247.4,  7737.3, 106885.9
00:14:31.017
00:14:31.017 Submit histogram
00:14:31.017 ================
00:14:31.017        Range in us     Cumulative     Count
00:14:31.017     10.590 -    10.641:   0.0148% (    1)
...
00:14:31.018    102.400 -   102.811: 100.0000% (    1)
00:14:31.018
00:14:31.018 Complete histogram
00:14:31.018 ==================
00:14:31.018        Range in us     Cumulative     Count
00:14:31.018      7.711 -     7.762:   0.1038% (    7)
...
00:14:31.019    106.101 -   106.924: 100.0000% (    1)
00:14:31.019
00:14:31.019 ************************************
00:14:31.019 END TEST nvme_overhead
00:14:31.019 ************************************
00:14:31.019
00:14:31.019 real 0m1.335s
00:14:31.019 user 0m1.106s
00:14:31.019 sys 0m0.171s
00:14:31.019
13:33:42 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.019 13:33:42 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:14:31.019 13:33:42 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:14:31.019 13:33:42 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:14:31.019 13:33:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.019 13:33:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.019 ************************************ 00:14:31.019 START TEST nvme_arbitration 00:14:31.019 ************************************ 00:14:31.019 13:33:42 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:14:35.258 Initializing NVMe Controllers 00:14:35.258 Attached to 0000:00:10.0 00:14:35.258 Attached to 0000:00:11.0 00:14:35.258 Attached to 0000:00:13.0 00:14:35.258 Attached to 0000:00:12.0 00:14:35.258 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:14:35.258 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:14:35.258 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:14:35.258 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:14:35.258 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:14:35.258 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:14:35.258 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:14:35.258 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:14:35.258 Initialization complete. Launching workers. 00:14:35.258 Starting thread on core 1 with urgent priority queue 00:14:35.258 Starting thread on core 2 with urgent priority queue 00:14:35.258 Starting thread on core 3 with urgent priority queue 00:14:35.258 Starting thread on core 0 with urgent priority queue 00:14:35.258 QEMU NVMe Ctrl (12340 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:14:35.258 QEMU NVMe Ctrl (12342 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:14:35.258 QEMU NVMe Ctrl (12341 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:14:35.258 QEMU NVMe Ctrl (12342 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:14:35.258 QEMU NVMe Ctrl (12343 ) core 2: 512.00 IO/s 195.31 secs/100000 ios 00:14:35.258 QEMU NVMe Ctrl (12342 ) core 3: 554.67 IO/s 180.29 secs/100000 ios 00:14:35.258 ======================================================== 00:14:35.258 00:14:35.258 00:14:35.258 real 0m3.416s 00:14:35.258 user 0m9.326s 00:14:35.258 sys 0m0.176s 00:14:35.258 13:33:46 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.258 ************************************ 00:14:35.258 END TEST nvme_arbitration 00:14:35.258 ************************************ 00:14:35.258 13:33:46 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:14:35.258 13:33:46 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:14:35.258 13:33:46 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:35.258 13:33:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.258 13:33:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:35.258 ************************************ 00:14:35.258 START TEST nvme_single_aen 00:14:35.258 ************************************ 00:14:35.258 13:33:46 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:14:35.258 Asynchronous Event Request test 00:14:35.258 Attached to 0000:00:10.0 00:14:35.258 Attached to 0000:00:11.0 00:14:35.259 Attached to 0000:00:13.0 00:14:35.259 Attached to 0000:00:12.0 00:14:35.259 Reset controller to setup AER completions for this process 00:14:35.259 Registering asynchronous event callbacks... 00:14:35.259 Getting orig temperature thresholds of all controllers 00:14:35.259 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:35.259 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:35.259 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:35.259 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:35.259 Setting all controllers temperature threshold low to trigger AER 00:14:35.259 Waiting for all controllers temperature threshold to be set lower 00:14:35.259 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:35.259 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:14:35.259 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:35.259 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:14:35.259 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:35.259 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:14:35.259 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:35.259 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:14:35.259 Waiting for all controllers to trigger AER and reset threshold 00:14:35.259 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:35.259 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:35.259 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:35.259 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:35.259 Cleaning up... 
00:14:35.259 00:14:35.259 real 0m0.307s 00:14:35.259 user 0m0.106s 00:14:35.259 sys 0m0.154s 00:14:35.259 ************************************ 00:14:35.259 END TEST nvme_single_aen 00:14:35.259 ************************************ 00:14:35.259 13:33:46 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.259 13:33:46 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:14:35.259 13:33:46 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:14:35.259 13:33:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:35.259 13:33:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.259 13:33:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:35.259 ************************************ 00:14:35.259 START TEST nvme_doorbell_aers 00:14:35.259 ************************************ 00:14:35.259 13:33:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:14:35.259 13:33:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:14:35.259 13:33:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:14:35.259 13:33:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:14:35.259 13:33:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:14:35.259 13:33:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:35.259 13:33:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:14:35.259 13:33:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:35.259 13:33:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:35.259 13:33:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:35.259 13:33:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:35.259 13:33:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:35.259 13:33:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:35.259 13:33:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:35.518 [2024-11-20 13:33:47.273003] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 00:14:45.497 Executing: test_write_invalid_db 00:14:45.497 Waiting for AER completion... 00:14:45.498 Failure: test_write_invalid_db 00:14:45.498 00:14:45.498 Executing: test_invalid_db_write_overflow_sq 00:14:45.498 Waiting for AER completion... 00:14:45.498 Failure: test_invalid_db_write_overflow_sq 00:14:45.498 00:14:45.498 Executing: test_invalid_db_write_overflow_cq 00:14:45.498 Waiting for AER completion... 
00:14:45.498 Failure: test_invalid_db_write_overflow_cq 00:14:45.498 00:14:45.498 13:33:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:45.498 13:33:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:45.498 [2024-11-20 13:33:57.313717] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 00:14:55.480 Executing: test_write_invalid_db 00:14:55.480 Waiting for AER completion... 00:14:55.480 Failure: test_write_invalid_db 00:14:55.480 00:14:55.480 Executing: test_invalid_db_write_overflow_sq 00:14:55.480 Waiting for AER completion... 00:14:55.480 Failure: test_invalid_db_write_overflow_sq 00:14:55.480 00:14:55.480 Executing: test_invalid_db_write_overflow_cq 00:14:55.480 Waiting for AER completion... 00:14:55.480 Failure: test_invalid_db_write_overflow_cq 00:14:55.480 00:14:55.480 13:34:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:55.480 13:34:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:55.480 [2024-11-20 13:34:07.417298] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 00:15:05.460 Executing: test_write_invalid_db 00:15:05.460 Waiting for AER completion... 00:15:05.460 Failure: test_write_invalid_db 00:15:05.460 00:15:05.460 Executing: test_invalid_db_write_overflow_sq 00:15:05.460 Waiting for AER completion... 00:15:05.460 Failure: test_invalid_db_write_overflow_sq 00:15:05.460 00:15:05.460 Executing: test_invalid_db_write_overflow_cq 00:15:05.460 Waiting for AER completion... 00:15:05.460 Failure: test_invalid_db_write_overflow_cq 00:15:05.460 00:15:05.460 13:34:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:15:05.460 13:34:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:05.719 [2024-11-20 13:34:17.438479] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 00:15:15.710 Executing: test_write_invalid_db 00:15:15.710 Waiting for AER completion... 00:15:15.710 Failure: test_write_invalid_db 00:15:15.710 00:15:15.710 Executing: test_invalid_db_write_overflow_sq 00:15:15.710 Waiting for AER completion... 00:15:15.710 Failure: test_invalid_db_write_overflow_sq 00:15:15.710 00:15:15.710 Executing: test_invalid_db_write_overflow_cq 00:15:15.710 Waiting for AER completion... 
00:15:15.710 Failure: test_invalid_db_write_overflow_cq 00:15:15.710 00:15:15.710 ************************************ 00:15:15.710 END TEST nvme_doorbell_aers 00:15:15.710 ************************************ 00:15:15.710 00:15:15.710 real 0m40.342s 00:15:15.710 user 0m28.338s 00:15:15.710 sys 0m11.620s 00:15:15.710 13:34:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.710 13:34:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:15:15.710 13:34:27 nvme -- nvme/nvme.sh@97 -- # uname 00:15:15.710 13:34:27 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:15:15.710 13:34:27 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:15:15.710 13:34:27 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:15:15.710 13:34:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.710 13:34:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.710 ************************************ 00:15:15.710 START TEST nvme_multi_aen 00:15:15.710 ************************************ 00:15:15.710 13:34:27 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:15:15.710 [2024-11-20 13:34:27.520999] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 00:15:15.710 [2024-11-20 13:34:27.521099] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 00:15:15.710 [2024-11-20 13:34:27.521117] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 00:15:15.710 [2024-11-20 13:34:27.522863] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 00:15:15.710 [2024-11-20 13:34:27.523037] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 00:15:15.710 [2024-11-20 13:34:27.523144] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 00:15:15.710 [2024-11-20 13:34:27.524577] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 00:15:15.710 [2024-11-20 13:34:27.524730] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 00:15:15.710 [2024-11-20 13:34:27.524750] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 00:15:15.710 [2024-11-20 13:34:27.526087] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 00:15:15.710 [2024-11-20 13:34:27.526253] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 00:15:15.710 [2024-11-20 13:34:27.526356] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64732) is not found. Dropping the request. 
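The nvme_doorbell_aers pass that just finished is driven by a short loop: gen_nvme.sh emits a JSON config covering every local controller, jq extracts the PCIe addresses, and each address gets one 10-second, timeout-bounded doorbell_aers run. A minimal sketch of that loop, using the paths recorded in this trace:

rootdir=/home/vagrant/spdk_repo/spdk    # checkout path as seen in this run
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || exit 1         # this run found four: 0000:00:10.0 through 0000:00:13.0
for bdf in "${bdfs[@]}"; do
    # bound each controller's run to 10 s; --preserve-status forwards the
    # binary's own exit code even when the cap expires
    timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
done

The Executing/Waiting/Failure triples are per-subtest output from the doorbell_aers binary itself, not harness failures; the END TEST banner above shows the pass completing normally after roughly 40 s of wall time.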
00:15:15.710 Child process pid: 65253 00:15:15.968 [Child] Asynchronous Event Request test 00:15:15.968 [Child] Attached to 0000:00:10.0 00:15:15.968 [Child] Attached to 0000:00:11.0 00:15:15.968 [Child] Attached to 0000:00:13.0 00:15:15.968 [Child] Attached to 0000:00:12.0 00:15:15.968 [Child] Registering asynchronous event callbacks... 00:15:15.968 [Child] Getting orig temperature thresholds of all controllers 00:15:15.968 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:15.968 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:15.968 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:15.968 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:15.968 [Child] Waiting for all controllers to trigger AER and reset threshold 00:15:15.968 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:15.968 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:15.968 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:15.968 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:15.968 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:15.968 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:15.968 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:15.968 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:15.968 [Child] Cleaning up... 00:15:16.228 Asynchronous Event Request test 00:15:16.228 Attached to 0000:00:10.0 00:15:16.228 Attached to 0000:00:11.0 00:15:16.228 Attached to 0000:00:13.0 00:15:16.228 Attached to 0000:00:12.0 00:15:16.228 Reset controller to setup AER completions for this process 00:15:16.228 Registering asynchronous event callbacks... 
00:15:16.228 Getting orig temperature thresholds of all controllers 00:15:16.228 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:16.228 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:16.228 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:16.228 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:15:16.228 Setting all controllers temperature threshold low to trigger AER 00:15:16.228 Waiting for all controllers temperature threshold to be set lower 00:15:16.228 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:16.228 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:15:16.228 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:16.228 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:15:16.228 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:16.228 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:15:16.228 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:15:16.228 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:15:16.228 Waiting for all controllers to trigger AER and reset threshold 00:15:16.228 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:16.228 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:16.228 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:16.228 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:15:16.228 Cleaning up... 00:15:16.228 ************************************ 00:15:16.228 END TEST nvme_multi_aen 00:15:16.228 ************************************ 00:15:16.228 00:15:16.228 real 0m0.726s 00:15:16.228 user 0m0.307s 00:15:16.228 sys 0m0.307s 00:15:16.228 13:34:27 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.228 13:34:27 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:15:16.228 13:34:28 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:15:16.228 13:34:28 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:16.228 13:34:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.228 13:34:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:16.228 ************************************ 00:15:16.228 START TEST nvme_startup 00:15:16.228 ************************************ 00:15:16.228 13:34:28 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:15:16.488 Initializing NVMe Controllers 00:15:16.488 Attached to 0000:00:10.0 00:15:16.488 Attached to 0000:00:11.0 00:15:16.488 Attached to 0000:00:13.0 00:15:16.488 Attached to 0000:00:12.0 00:15:16.488 Initialization complete. 00:15:16.488 Time used:196237.359 (us). 
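The multi-AEN pass above exercises temperature-threshold Asynchronous Event Requests from both a forked child and the parent: each attaches to all four controllers and reads the original threshold (343 K), the threshold is then dropped below the current reading of 323 K via Set Features, and the aer_cb callback fires on the resulting temperature event and restores the threshold. Outside this harness the same trigger can be reproduced against a kernel-owned drive with nvme-cli; the device path and value below are illustrative, not taken from this run:

# feature 0x04 = temperature threshold, TMPTH expressed in Kelvin
nvme get-feature /dev/nvme0 -f 0x04           # read the current threshold
nvme set-feature /dev/nvme0 -f 0x04 -v 0x140  # 0x140 = 320 K, just below a 323 K reading
# the controller then raises a SMART/temperature async event, surfaced through
# the kernel log when the in-kernel driver owns the device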
00:15:16.488 00:15:16.488 real 0m0.303s 00:15:16.488 user 0m0.107s 00:15:16.488 sys 0m0.145s 00:15:16.488 ************************************ 00:15:16.488 END TEST nvme_startup 00:15:16.488 ************************************ 00:15:16.488 13:34:28 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.488 13:34:28 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:15:16.488 13:34:28 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:15:16.488 13:34:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:16.488 13:34:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.488 13:34:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:16.488 ************************************ 00:15:16.488 START TEST nvme_multi_secondary 00:15:16.488 ************************************ 00:15:16.488 13:34:28 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:15:16.488 13:34:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65309 00:15:16.488 13:34:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:15:16.488 13:34:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65310 00:15:16.488 13:34:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:15:16.488 13:34:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:15:20.677 Initializing NVMe Controllers 00:15:20.677 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:20.677 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:20.677 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:20.677 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:20.677 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:15:20.677 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:15:20.677 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:15:20.677 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:15:20.677 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:15:20.677 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:15:20.677 Initialization complete. Launching workers. 
00:15:20.677 ======================================================== 00:15:20.677 Latency(us) 00:15:20.677 Device Information : IOPS MiB/s Average min max 00:15:20.677 PCIE (0000:00:10.0) NSID 1 from core 2: 3116.99 12.18 5131.03 1391.13 14242.66 00:15:20.677 PCIE (0000:00:11.0) NSID 1 from core 2: 3116.99 12.18 5133.04 1220.16 14243.11 00:15:20.677 PCIE (0000:00:13.0) NSID 1 from core 2: 3116.99 12.18 5133.07 1288.17 18592.54 00:15:20.677 PCIE (0000:00:12.0) NSID 1 from core 2: 3116.99 12.18 5132.94 1293.39 14216.66 00:15:20.677 PCIE (0000:00:12.0) NSID 2 from core 2: 3116.99 12.18 5140.05 1253.80 15496.77 00:15:20.677 PCIE (0000:00:12.0) NSID 3 from core 2: 3116.99 12.18 5140.10 1386.59 14194.14 00:15:20.677 ======================================================== 00:15:20.677 Total : 18701.92 73.05 5135.04 1220.16 18592.54 00:15:20.677 00:15:20.677 13:34:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65309 00:15:20.677 Initializing NVMe Controllers 00:15:20.677 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:20.677 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:20.677 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:20.677 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:20.677 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:15:20.677 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:15:20.677 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:15:20.677 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:15:20.677 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:15:20.677 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:15:20.677 Initialization complete. Launching workers. 00:15:20.677 ======================================================== 00:15:20.677 Latency(us) 00:15:20.677 Device Information : IOPS MiB/s Average min max 00:15:20.677 PCIE (0000:00:10.0) NSID 1 from core 1: 4945.40 19.32 3232.82 1519.38 13981.76 00:15:20.677 PCIE (0000:00:11.0) NSID 1 from core 1: 4945.40 19.32 3234.72 1452.56 13530.45 00:15:20.677 PCIE (0000:00:13.0) NSID 1 from core 1: 4945.40 19.32 3234.72 1454.19 13585.15 00:15:20.677 PCIE (0000:00:12.0) NSID 1 from core 1: 4945.40 19.32 3234.89 1469.79 13725.98 00:15:20.677 PCIE (0000:00:12.0) NSID 2 from core 1: 4945.40 19.32 3234.88 1452.90 14007.40 00:15:20.677 PCIE (0000:00:12.0) NSID 3 from core 1: 4945.40 19.32 3234.87 1373.89 13979.43 00:15:20.677 ======================================================== 00:15:20.677 Total : 29672.40 115.91 3234.48 1373.89 14007.40 00:15:20.677 00:15:22.052 Initializing NVMe Controllers 00:15:22.052 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:22.052 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:22.052 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:22.052 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:22.052 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:22.052 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:15:22.052 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:15:22.052 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:15:22.052 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:15:22.052 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:15:22.052 Initialization complete. Launching workers. 
00:15:22.052 ======================================================== 00:15:22.052 Latency(us) 00:15:22.052 Device Information : IOPS MiB/s Average min max 00:15:22.052 PCIE (0000:00:10.0) NSID 1 from core 0: 8238.17 32.18 1940.57 921.44 12262.89 00:15:22.052 PCIE (0000:00:11.0) NSID 1 from core 0: 8238.17 32.18 1941.72 940.84 13236.55 00:15:22.052 PCIE (0000:00:13.0) NSID 1 from core 0: 8238.17 32.18 1941.69 930.38 13561.87 00:15:22.052 PCIE (0000:00:12.0) NSID 1 from core 0: 8238.17 32.18 1941.68 856.78 13519.77 00:15:22.052 PCIE (0000:00:12.0) NSID 2 from core 0: 8238.17 32.18 1941.65 832.62 12769.71 00:15:22.052 PCIE (0000:00:12.0) NSID 3 from core 0: 8241.37 32.19 1940.88 806.77 12366.08 00:15:22.052 ======================================================== 00:15:22.052 Total : 49432.24 193.09 1941.37 806.77 13561.87 00:15:22.052 00:15:22.052 13:34:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65310 00:15:22.052 13:34:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65379 00:15:22.052 13:34:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:15:22.052 13:34:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65380 00:15:22.052 13:34:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:15:22.052 13:34:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:15:25.335 Initializing NVMe Controllers 00:15:25.335 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:25.335 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:25.335 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:25.335 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:25.335 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:15:25.335 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:15:25.335 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:15:25.335 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:15:25.335 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:15:25.335 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:15:25.335 Initialization complete. Launching workers. 
00:15:25.335 ======================================================== 00:15:25.335 Latency(us) 00:15:25.335 Device Information : IOPS MiB/s Average min max 00:15:25.335 PCIE (0000:00:10.0) NSID 1 from core 1: 4951.67 19.34 3228.99 1156.48 8674.83 00:15:25.335 PCIE (0000:00:11.0) NSID 1 from core 1: 4951.67 19.34 3231.45 1197.21 8168.39 00:15:25.335 PCIE (0000:00:13.0) NSID 1 from core 1: 4951.67 19.34 3232.19 1196.15 8146.46 00:15:25.335 PCIE (0000:00:12.0) NSID 1 from core 1: 4951.67 19.34 3232.79 1198.49 7958.10 00:15:25.335 PCIE (0000:00:12.0) NSID 2 from core 1: 4951.67 19.34 3233.20 1181.12 7770.34 00:15:25.335 PCIE (0000:00:12.0) NSID 3 from core 1: 4951.67 19.34 3233.59 1191.81 7984.63 00:15:25.335 ======================================================== 00:15:25.335 Total : 29710.01 116.05 3232.03 1156.48 8674.83 00:15:25.335 00:15:25.594 Initializing NVMe Controllers 00:15:25.594 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:25.594 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:25.594 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:25.594 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:25.594 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:25.594 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:15:25.594 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:15:25.594 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:15:25.594 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:15:25.594 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:15:25.594 Initialization complete. Launching workers. 00:15:25.594 ======================================================== 00:15:25.594 Latency(us) 00:15:25.594 Device Information : IOPS MiB/s Average min max 00:15:25.594 PCIE (0000:00:10.0) NSID 1 from core 0: 4727.21 18.47 3382.19 1045.33 13214.99 00:15:25.594 PCIE (0000:00:11.0) NSID 1 from core 0: 4727.21 18.47 3384.22 1064.17 12123.42 00:15:25.594 PCIE (0000:00:13.0) NSID 1 from core 0: 4727.21 18.47 3384.42 1077.26 12291.02 00:15:25.594 PCIE (0000:00:12.0) NSID 1 from core 0: 4727.21 18.47 3384.58 1102.53 12412.04 00:15:25.594 PCIE (0000:00:12.0) NSID 2 from core 0: 4727.21 18.47 3384.76 1068.03 12590.33 00:15:25.594 PCIE (0000:00:12.0) NSID 3 from core 0: 4727.21 18.47 3384.93 1057.88 12937.58 00:15:25.594 ======================================================== 00:15:25.594 Total : 28363.25 110.79 3384.18 1045.33 13214.99 00:15:25.594 00:15:28.159 Initializing NVMe Controllers 00:15:28.159 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:28.159 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:15:28.159 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:15:28.159 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:15:28.159 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:15:28.159 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:15:28.159 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:15:28.159 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:15:28.159 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:15:28.159 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:15:28.159 Initialization complete. Launching workers. 
00:15:28.159 ======================================================== 00:15:28.159 Latency(us) 00:15:28.159 Device Information : IOPS MiB/s Average min max 00:15:28.159 PCIE (0000:00:10.0) NSID 1 from core 2: 3167.22 12.37 5050.03 1181.40 11715.37 00:15:28.159 PCIE (0000:00:11.0) NSID 1 from core 2: 3167.22 12.37 5051.29 1222.02 12843.86 00:15:28.159 PCIE (0000:00:13.0) NSID 1 from core 2: 3167.22 12.37 5051.41 1235.25 12714.29 00:15:28.159 PCIE (0000:00:12.0) NSID 1 from core 2: 3167.22 12.37 5051.28 1184.80 11371.91 00:15:28.159 PCIE (0000:00:12.0) NSID 2 from core 2: 3167.22 12.37 5050.66 1190.36 11083.28 00:15:28.159 PCIE (0000:00:12.0) NSID 3 from core 2: 3167.22 12.37 5051.02 1107.82 11368.93 00:15:28.159 ======================================================== 00:15:28.159 Total : 19003.33 74.23 5050.95 1107.82 12843.86 00:15:28.159 00:15:28.159 ************************************ 00:15:28.159 END TEST nvme_multi_secondary 00:15:28.159 ************************************ 00:15:28.159 13:34:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65379 00:15:28.160 13:34:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65380 00:15:28.160 00:15:28.160 real 0m11.190s 00:15:28.160 user 0m18.617s 00:15:28.160 sys 0m1.115s 00:15:28.160 13:34:39 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.160 13:34:39 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:15:28.160 13:34:39 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:15:28.160 13:34:39 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:15:28.160 13:34:39 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64312 ]] 00:15:28.160 13:34:39 nvme -- common/autotest_common.sh@1094 -- # kill 64312 00:15:28.160 13:34:39 nvme -- common/autotest_common.sh@1095 -- # wait 64312 00:15:28.160 [2024-11-20 13:34:39.679184] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 00:15:28.160 [2024-11-20 13:34:39.679333] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 00:15:28.160 [2024-11-20 13:34:39.679439] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 00:15:28.160 [2024-11-20 13:34:39.679511] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 00:15:28.160 [2024-11-20 13:34:39.685128] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 00:15:28.160 [2024-11-20 13:34:39.685211] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 00:15:28.160 [2024-11-20 13:34:39.685243] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 00:15:28.160 [2024-11-20 13:34:39.685276] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 00:15:28.160 [2024-11-20 13:34:39.690200] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 
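The two nvme_multi_secondary rounds summarized above share one pattern: three spdk_nvme_perf instances on disjoint core masks, all passing -i 0 so they join the same shared-memory instance and drive the same controllers as one primary plus two secondary processes. Reduced to its shape, with the binary path from this trace:

PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # core 0, backgrounded
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # core 1, backgrounded
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # core 2, foreground
wait "$pid0"
wait "$pid1"

The second round (pids 65379/65380) moves the longer -t 5 run to core mask 0x4. The surrounding 'owning process (pid 65252) is not found' messages come from kill_stub tearing down the long-running setup stub (pid 64312): the driver drops admin requests registered by an already-exited process, which is noisy but non-fatal here.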
00:15:28.160 [2024-11-20 13:34:39.690554] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 00:15:28.160 [2024-11-20 13:34:39.690623] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 00:15:28.160 [2024-11-20 13:34:39.690661] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 00:15:28.160 [2024-11-20 13:34:39.694076] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 00:15:28.160 [2024-11-20 13:34:39.694142] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 00:15:28.160 [2024-11-20 13:34:39.694164] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 00:15:28.160 [2024-11-20 13:34:39.694186] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65252) is not found. Dropping the request. 00:15:28.160 13:34:39 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:15:28.160 13:34:39 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:15:28.160 13:34:39 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:15:28.160 13:34:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:28.160 13:34:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.160 13:34:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.160 ************************************ 00:15:28.160 START TEST bdev_nvme_reset_stuck_adm_cmd 00:15:28.160 ************************************ 00:15:28.160 13:34:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:15:28.160 * Looking for test storage... 
00:15:28.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.160 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:28.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.419 --rc genhtml_branch_coverage=1 00:15:28.419 --rc genhtml_function_coverage=1 00:15:28.419 --rc genhtml_legend=1 00:15:28.419 --rc geninfo_all_blocks=1 00:15:28.419 --rc geninfo_unexecuted_blocks=1 00:15:28.419 00:15:28.419 ' 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:28.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.419 --rc genhtml_branch_coverage=1 00:15:28.419 --rc genhtml_function_coverage=1 00:15:28.419 --rc genhtml_legend=1 00:15:28.419 --rc geninfo_all_blocks=1 00:15:28.419 --rc geninfo_unexecuted_blocks=1 00:15:28.419 00:15:28.419 ' 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:28.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.419 --rc genhtml_branch_coverage=1 00:15:28.419 --rc genhtml_function_coverage=1 00:15:28.419 --rc genhtml_legend=1 00:15:28.419 --rc geninfo_all_blocks=1 00:15:28.419 --rc geninfo_unexecuted_blocks=1 00:15:28.419 00:15:28.419 ' 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:28.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.419 --rc genhtml_branch_coverage=1 00:15:28.419 --rc genhtml_function_coverage=1 00:15:28.419 --rc genhtml_legend=1 00:15:28.419 --rc geninfo_all_blocks=1 00:15:28.419 --rc geninfo_unexecuted_blocks=1 00:15:28.419 00:15:28.419 ' 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:15:28.419 
13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65541 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65541 00:15:28.419 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65541 ']' 00:15:28.420 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.420 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.420 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
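The bdev_nvme_reset_stuck_adm_cmd setup above takes the first PCIe address (0000:00:10.0), starts spdk_tgt on four cores (-m 0xF), and blocks in waitforlisten until the RPC socket answers. A bare-bones stand-in for that wait, assuming the default /var/tmp/spdk.sock socket used in this run:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
spdk_target_pid=$!
# poll the RPC socket until the target answers; waitforlisten does the same
# thing with more bookkeeping (pid liveness checks, an overall timeout)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done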
00:15:28.420 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.420 13:34:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:28.420 [2024-11-20 13:34:40.360675] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:15:28.420 [2024-11-20 13:34:40.360963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65541 ] 00:15:28.677 [2024-11-20 13:34:40.573642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:28.936 [2024-11-20 13:34:40.704147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.936 [2024-11-20 13:34:40.704320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.936 [2024-11-20 13:34:40.704478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:28.936 [2024-11-20 13:34:40.704500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:29.870 nvme0n1 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_aTXg8.txt 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:29.870 true 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732109681 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65575 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:15:29.870 13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:29.870 
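With the target listening, the stuck-admin-command scenario is wired up entirely over RPC: attach the controller as nvme0, inject an error on admin opcode 10 (0x0a, Get Features) that holds the command for up to 15 s and completes it with SCT 0 / SC 1 (generic command set / invalid command opcode), then submit exactly such a command in the background, capturing its completion to the mktemp file. Condensed from the trace above:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
"$RPC" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
# -r c2h: controller-to-host data direction; the base64 payload in the trace
# starts with 'Cg', i.e. first byte 0x0a, the very opcode the injection traps
"$RPC" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64 command from the trace> &

The bdev_nvme_reset_controller that follows must complete the held command manually (the INVALID OPCODE (00/01) completion below); the script then decodes the returned CQE from the temp file and asserts the status matches the injection (SC 0x1, SCT 0x0) and that the reset finished within test_timeout=5 s (diff_time comes out to 2 s in this run).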
13:34:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:32.430 [2024-11-20 13:34:43.796166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:15:32.430 [2024-11-20 13:34:43.796713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:32.430 [2024-11-20 13:34:43.796895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:32.430 [2024-11-20 13:34:43.797016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:32.430 [2024-11-20 13:34:43.799274] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65575 00:15:32.430 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65575 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65575 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_aTXg8.txt 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_aTXg8.txt 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65541 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65541 ']' 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65541 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65541 00:15:32.430 killing process with pid 65541 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65541' 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65541 00:15:32.430 13:34:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65541 00:15:34.967 13:34:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:15:34.967 13:34:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:15:34.967 00:15:34.967 real 0m6.534s 00:15:34.967 user 0m22.699s 00:15:34.967 sys 0m0.831s 00:15:34.967 13:34:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:15:34.967 ************************************ 00:15:34.967 END TEST bdev_nvme_reset_stuck_adm_cmd 00:15:34.967 ************************************ 00:15:34.967 13:34:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:34.967 13:34:46 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:15:34.967 13:34:46 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:15:34.967 13:34:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:34.967 13:34:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.967 13:34:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.967 ************************************ 00:15:34.967 START TEST nvme_fio 00:15:34.967 ************************************ 00:15:34.967 13:34:46 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:15:34.967 13:34:46 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:34.967 13:34:46 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:15:34.967 13:34:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:15:34.967 13:34:46 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:34.967 13:34:46 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:15:34.967 13:34:46 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:34.967 13:34:46 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:34.967 13:34:46 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:34.967 13:34:46 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:34.967 13:34:46 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:34.967 13:34:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:15:34.967 13:34:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:15:34.967 13:34:46 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:34.967 13:34:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:34.967 13:34:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:35.226 13:34:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:35.226 13:34:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:35.485 13:34:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:35.485 13:34:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:35.485 13:34:47 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:35.485 13:34:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:35.485 13:34:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:35.485 13:34:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:35.485 13:34:47 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:35.485 13:34:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:35.485 13:34:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:35.485 13:34:47 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.485 13:34:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:35.485 13:34:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:35.485 13:34:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:35.485 13:34:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:35.485 13:34:47 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:35.485 13:34:47 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:35.486 13:34:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:35.486 13:34:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:35.746 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:35.746 fio-3.35 00:15:35.746 Starting 1 thread 00:15:39.963 00:15:39.963 test: (groupid=0, jobs=1): err= 0: pid=65728: Wed Nov 20 13:34:51 2024 00:15:39.963 read: IOPS=21.8k, BW=85.1MiB/s (89.3MB/s)(170MiB/2001msec) 00:15:39.963 slat (nsec): min=3906, max=83496, avg=4810.16, stdev=1329.61 00:15:39.963 clat (usec): min=204, max=7906, avg=2933.26, stdev=298.60 00:15:39.963 lat (usec): min=209, max=7944, avg=2938.07, stdev=298.92 00:15:39.963 clat percentiles (usec): 00:15:39.963 | 1.00th=[ 2638], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2802], 00:15:39.963 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:15:39.963 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3130], 95.00th=[ 3621], 00:15:39.963 | 99.00th=[ 4080], 99.50th=[ 4178], 99.90th=[ 4752], 99.95th=[ 6456], 00:15:39.963 | 99.99th=[ 7504] 00:15:39.963 bw ( KiB/s): min=82992, max=89368, per=98.63%, avg=85989.33, stdev=3205.06, samples=3 00:15:39.963 iops : min=20748, max=22342, avg=21497.33, stdev=801.26, samples=3 00:15:39.963 write: IOPS=21.6k, BW=84.5MiB/s (88.7MB/s)(169MiB/2001msec); 0 zone resets 00:15:39.963 slat (usec): min=4, max=185, avg= 4.96, stdev= 1.48 00:15:39.963 clat (usec): min=322, max=7534, avg=2940.41, stdev=302.92 00:15:39.963 lat (usec): min=327, max=7545, avg=2945.37, stdev=303.23 00:15:39.963 clat percentiles (usec): 00:15:39.963 | 1.00th=[ 2638], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2802], 00:15:39.963 | 30.00th=[ 2835], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:15:39.963 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3130], 95.00th=[ 3654], 00:15:39.963 | 99.00th=[ 4113], 99.50th=[ 4178], 99.90th=[ 4883], 99.95th=[ 6521], 00:15:39.963 | 99.99th=[ 7439] 00:15:39.963 bw ( KiB/s): min=83224, max=89888, per=99.51%, avg=86157.33, stdev=3402.80, samples=3 00:15:39.963 iops : min=20806, max=22472, avg=21539.33, stdev=850.70, samples=3 00:15:39.963 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:15:39.963 lat (msec) : 2=0.12%, 4=97.95%, 10=1.89% 00:15:39.963 cpu : usr=99.45%, sys=0.00%, ctx=3, majf=0, minf=607 
00:15:39.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:39.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:39.963 issued rwts: total=43615,43311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:39.963 00:15:39.963 Run status group 0 (all jobs): 00:15:39.963 READ: bw=85.1MiB/s (89.3MB/s), 85.1MiB/s-85.1MiB/s (89.3MB/s-89.3MB/s), io=170MiB (179MB), run=2001-2001msec 00:15:39.963 WRITE: bw=84.5MiB/s (88.7MB/s), 84.5MiB/s-84.5MiB/s (88.7MB/s-88.7MB/s), io=169MiB (177MB), run=2001-2001msec 00:15:39.963 ----------------------------------------------------- 00:15:39.963 Suppressions used: 00:15:39.963 count bytes template 00:15:39.963 1 32 /usr/src/fio/parse.c 00:15:39.963 1 8 libtcmalloc_minimal.so 00:15:39.963 ----------------------------------------------------- 00:15:39.963 00:15:39.963 13:34:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:39.963 13:34:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:39.963 13:34:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:15:39.963 13:34:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:40.221 13:34:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:15:40.221 13:34:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:40.479 13:34:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:40.479 13:34:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:40.479 13:34:52 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:40.479 13:34:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:40.479 13:34:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:40.479 13:34:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:40.479 13:34:52 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:40.479 13:34:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:40.479 13:34:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:40.479 13:34:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:40.479 13:34:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:40.479 13:34:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:40.479 13:34:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:40.479 13:34:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:40.479 13:34:52 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:40.479 13:34:52 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:40.479 13:34:52 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:40.479 13:34:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:40.738 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:40.738 fio-3.35 00:15:40.738 Starting 1 thread 00:15:44.927 00:15:44.927 test: (groupid=0, jobs=1): err= 0: pid=65794: Wed Nov 20 13:34:56 2024 00:15:44.927 read: IOPS=21.7k, BW=84.9MiB/s (89.0MB/s)(170MiB/2001msec) 00:15:44.927 slat (usec): min=4, max=391, avg= 4.85, stdev= 2.96 00:15:44.927 clat (usec): min=250, max=9043, avg=2942.52, stdev=358.95 00:15:44.927 lat (usec): min=255, max=9078, avg=2947.37, stdev=359.64 00:15:44.927 clat percentiles (usec): 00:15:44.927 | 1.00th=[ 2409], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2802], 00:15:44.927 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2900], 00:15:44.927 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3064], 95.00th=[ 3392], 00:15:44.927 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5276], 99.95th=[ 6849], 00:15:44.927 | 99.99th=[ 8848] 00:15:44.927 bw ( KiB/s): min=80696, max=88488, per=98.25%, avg=85397.33, stdev=4138.18, samples=3 00:15:44.927 iops : min=20174, max=22122, avg=21349.33, stdev=1034.54, samples=3 00:15:44.927 write: IOPS=21.6k, BW=84.3MiB/s (88.3MB/s)(169MiB/2001msec); 0 zone resets 00:15:44.927 slat (usec): min=4, max=342, avg= 5.02, stdev= 2.63 00:15:44.927 clat (usec): min=195, max=8894, avg=2949.79, stdev=366.91 00:15:44.927 lat (usec): min=200, max=8906, avg=2954.81, stdev=367.62 00:15:44.927 clat percentiles (usec): 00:15:44.927 | 1.00th=[ 2442], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2802], 00:15:44.927 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:15:44.927 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3064], 95.00th=[ 3425], 00:15:44.927 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5669], 99.95th=[ 7046], 00:15:44.927 | 99.99th=[ 8586] 00:15:44.927 bw ( KiB/s): min=80528, max=89000, per=99.18%, avg=85573.33, stdev=4461.92, samples=3 00:15:44.927 iops : min=20132, max=22250, avg=21393.33, stdev=1115.48, samples=3 00:15:44.927 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:15:44.927 lat (msec) : 2=0.19%, 4=96.06%, 10=3.70% 00:15:44.927 cpu : usr=99.00%, sys=0.25%, ctx=21, majf=0, minf=607 00:15:44.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:44.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:44.927 issued rwts: total=43482,43161,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:44.927 00:15:44.927 Run status group 0 (all jobs): 00:15:44.927 READ: bw=84.9MiB/s (89.0MB/s), 84.9MiB/s-84.9MiB/s (89.0MB/s-89.0MB/s), io=170MiB (178MB), run=2001-2001msec 00:15:44.927 WRITE: bw=84.3MiB/s (88.3MB/s), 84.3MiB/s-84.3MiB/s (88.3MB/s-88.3MB/s), io=169MiB (177MB), run=2001-2001msec 00:15:44.927 ----------------------------------------------------- 00:15:44.927 Suppressions used: 00:15:44.927 count bytes template 00:15:44.927 1 32 /usr/src/fio/parse.c 00:15:44.927 1 8 libtcmalloc_minimal.so 00:15:44.927 ----------------------------------------------------- 00:15:44.927 00:15:44.927 13:34:56 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:44.927 13:34:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:44.927 13:34:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:44.927 13:34:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:44.927 13:34:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:44.927 13:34:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:44.927 13:34:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:44.927 13:34:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:44.927 13:34:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:45.187 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:45.187 fio-3.35 00:15:45.187 Starting 1 thread 00:15:49.377 00:15:49.377 test: (groupid=0, jobs=1): err= 0: pid=65855: Wed Nov 20 13:35:00 2024 00:15:49.377 read: IOPS=22.0k, BW=85.8MiB/s (89.9MB/s)(172MiB/2001msec) 00:15:49.377 slat (nsec): min=3890, max=76639, avg=4771.39, stdev=1312.34 00:15:49.377 clat (usec): min=239, max=9957, avg=2905.77, stdev=421.66 00:15:49.377 lat (usec): min=244, max=10016, avg=2910.54, stdev=422.27 00:15:49.377 clat percentiles (usec): 00:15:49.377 | 1.00th=[ 2376], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2802], 00:15:49.377 | 30.00th=[ 2835], 40.00th=[ 
2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:15:49.377 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3064], 00:15:49.377 | 99.00th=[ 4948], 99.50th=[ 6063], 99.90th=[ 8094], 99.95th=[ 8291], 00:15:49.377 | 99.99th=[ 9634] 00:15:49.377 bw ( KiB/s): min=85528, max=89536, per=98.95%, avg=86900.00, stdev=2283.48, samples=3 00:15:49.377 iops : min=21382, max=22384, avg=21725.00, stdev=570.87, samples=3 00:15:49.377 write: IOPS=21.8k, BW=85.2MiB/s (89.3MB/s)(170MiB/2001msec); 0 zone resets 00:15:49.377 slat (nsec): min=4095, max=50291, avg=4936.13, stdev=1218.34 00:15:49.377 clat (usec): min=209, max=9767, avg=2915.45, stdev=451.83 00:15:49.377 lat (usec): min=214, max=9789, avg=2920.39, stdev=452.44 00:15:49.377 clat percentiles (usec): 00:15:49.377 | 1.00th=[ 2442], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2802], 00:15:49.377 | 30.00th=[ 2835], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:15:49.377 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3097], 00:15:49.377 | 99.00th=[ 5342], 99.50th=[ 6587], 99.90th=[ 8225], 99.95th=[ 8291], 00:15:49.377 | 99.99th=[ 9241] 00:15:49.377 bw ( KiB/s): min=85405, max=90456, per=99.85%, avg=87116.33, stdev=2892.53, samples=3 00:15:49.377 iops : min=21351, max=22614, avg=21779.00, stdev=723.21, samples=3 00:15:49.377 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:15:49.377 lat (msec) : 2=0.40%, 4=97.90%, 10=1.65% 00:15:49.377 cpu : usr=99.25%, sys=0.20%, ctx=4, majf=0, minf=607 00:15:49.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:49.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:49.377 issued rwts: total=43935,43647,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.377 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:49.377 00:15:49.377 Run status group 0 (all jobs): 00:15:49.377 READ: bw=85.8MiB/s (89.9MB/s), 85.8MiB/s-85.8MiB/s (89.9MB/s-89.9MB/s), io=172MiB (180MB), run=2001-2001msec 00:15:49.377 WRITE: bw=85.2MiB/s (89.3MB/s), 85.2MiB/s-85.2MiB/s (89.3MB/s-89.3MB/s), io=170MiB (179MB), run=2001-2001msec 00:15:49.377 ----------------------------------------------------- 00:15:49.377 Suppressions used: 00:15:49.377 count bytes template 00:15:49.377 1 32 /usr/src/fio/parse.c 00:15:49.377 1 8 libtcmalloc_minimal.so 00:15:49.377 ----------------------------------------------------- 00:15:49.377 00:15:49.377 13:35:00 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:49.377 13:35:00 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:49.377 13:35:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:49.377 13:35:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:49.377 13:35:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:49.377 13:35:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:49.636 13:35:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:49.636 13:35:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:49.636 13:35:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:49.893 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:49.893 fio-3.35 00:15:49.893 Starting 1 thread 00:15:56.454 00:15:56.454 test: (groupid=0, jobs=1): err= 0: pid=65922: Wed Nov 20 13:35:07 2024 00:15:56.454 read: IOPS=21.5k, BW=83.9MiB/s (87.9MB/s)(168MiB/2001msec) 00:15:56.454 slat (usec): min=3, max=350, avg= 4.81, stdev= 2.10 00:15:56.454 clat (usec): min=258, max=11489, avg=2979.19, stdev=401.84 00:15:56.454 lat (usec): min=264, max=11547, avg=2983.99, stdev=402.31 00:15:56.454 clat percentiles (usec): 00:15:56.454 | 1.00th=[ 2442], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2835], 00:15:56.454 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:15:56.454 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3261], 95.00th=[ 3654], 00:15:56.454 | 99.00th=[ 4424], 99.50th=[ 4752], 99.90th=[ 6652], 99.95th=[ 9110], 00:15:56.454 | 99.99th=[11207] 00:15:56.454 bw ( KiB/s): min=84904, max=88544, per=100.00%, avg=86544.00, stdev=1846.51, samples=3 00:15:56.454 iops : min=21226, max=22136, avg=21636.00, stdev=461.63, samples=3 00:15:56.454 write: IOPS=21.3k, BW=83.2MiB/s (87.3MB/s)(167MiB/2001msec); 0 zone resets 00:15:56.454 slat (usec): min=3, max=298, avg= 4.96, stdev= 2.15 00:15:56.454 clat (usec): min=292, max=11327, avg=2979.39, stdev=399.40 00:15:56.454 lat (usec): min=298, max=11347, avg=2984.35, stdev=399.85 00:15:56.454 clat percentiles (usec): 00:15:56.454 | 1.00th=[ 2474], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2835], 00:15:56.454 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:15:56.454 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3228], 95.00th=[ 3621], 00:15:56.454 | 99.00th=[ 
4424], 99.50th=[ 4817], 99.90th=[ 6915], 99.95th=[ 9241], 00:15:56.454 | 99.99th=[10945] 00:15:56.454 bw ( KiB/s): min=85048, max=89464, per=100.00%, avg=86760.00, stdev=2369.24, samples=3 00:15:56.454 iops : min=21262, max=22366, avg=21690.00, stdev=592.31, samples=3 00:15:56.454 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:15:56.454 lat (msec) : 2=0.53%, 4=96.64%, 10=2.76%, 20=0.03% 00:15:56.454 cpu : usr=98.60%, sys=0.50%, ctx=10, majf=0, minf=606 00:15:56.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:56.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:56.454 issued rwts: total=42966,42629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:56.454 00:15:56.454 Run status group 0 (all jobs): 00:15:56.454 READ: bw=83.9MiB/s (87.9MB/s), 83.9MiB/s-83.9MiB/s (87.9MB/s-87.9MB/s), io=168MiB (176MB), run=2001-2001msec 00:15:56.454 WRITE: bw=83.2MiB/s (87.3MB/s), 83.2MiB/s-83.2MiB/s (87.3MB/s-87.3MB/s), io=167MiB (175MB), run=2001-2001msec 00:15:56.454 ----------------------------------------------------- 00:15:56.454 Suppressions used: 00:15:56.454 count bytes template 00:15:56.454 1 32 /usr/src/fio/parse.c 00:15:56.454 1 8 libtcmalloc_minimal.so 00:15:56.454 ----------------------------------------------------- 00:15:56.454 00:15:56.455 13:35:07 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:56.455 13:35:07 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:15:56.455 00:15:56.455 real 0m21.413s 00:15:56.455 user 0m16.848s 00:15:56.455 sys 0m4.135s 00:15:56.455 13:35:07 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.455 13:35:07 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:15:56.455 ************************************ 00:15:56.455 END TEST nvme_fio 00:15:56.455 ************************************ 00:15:56.455 00:15:56.455 real 1m37.318s 00:15:56.455 user 3m46.760s 00:15:56.455 sys 0m23.743s 00:15:56.455 13:35:07 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.455 ************************************ 00:15:56.455 END TEST nvme 00:15:56.455 ************************************ 00:15:56.455 13:35:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:56.455 13:35:08 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:15:56.455 13:35:08 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:56.455 13:35:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:56.455 13:35:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.455 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:15:56.455 ************************************ 00:15:56.455 START TEST nvme_scc 00:15:56.455 ************************************ 00:15:56.455 13:35:08 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:56.455 * Looking for test storage... 
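The three nvme_fio passes above (traddr 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0) all repeat one invocation pattern: identify the namespace, let the 'Extended Data LBA' check decide the block size (it settles on 4096 here), then run fio against SPDK's external ioengine with the ASan runtime preloaded ahead of the plugin, since fio itself is not built with ASan and could not otherwise load the sanitized .so. A minimal sketch of that pattern, using the paths from this run; the real fio_plugin helper in autotest_common.sh does more bookkeeping than this:

spdk=/home/vagrant/spdk_repo/spdk
plugin=$spdk/build/fio/spdk_nvme

# Locate the ASan runtime the plugin links against (empty if the build
# is unsanitized); this mirrors the ldd | grep libasan | awk '{print $3}'
# step visible in the trace.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# The SPDK ioengine encodes the PCIe address in --filename, with ':'
# rewritten as '.' so fio does not split the address on its separator.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    "$spdk/app/fio/nvme/example_config.fio" \
    '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096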
00:15:56.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:56.455 13:35:08 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:56.455 13:35:08 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:56.455 13:35:08 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:56.455 13:35:08 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@345 -- # : 1 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@368 -- # return 0 00:15:56.455 13:35:08 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.455 13:35:08 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:56.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.455 --rc genhtml_branch_coverage=1 00:15:56.455 --rc genhtml_function_coverage=1 00:15:56.455 --rc genhtml_legend=1 00:15:56.455 --rc geninfo_all_blocks=1 00:15:56.455 --rc geninfo_unexecuted_blocks=1 00:15:56.455 00:15:56.455 ' 00:15:56.455 13:35:08 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:56.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.455 --rc genhtml_branch_coverage=1 00:15:56.455 --rc genhtml_function_coverage=1 00:15:56.455 --rc genhtml_legend=1 00:15:56.455 --rc geninfo_all_blocks=1 00:15:56.455 --rc geninfo_unexecuted_blocks=1 00:15:56.455 00:15:56.455 ' 00:15:56.455 13:35:08 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:15:56.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.455 --rc genhtml_branch_coverage=1 00:15:56.455 --rc genhtml_function_coverage=1 00:15:56.455 --rc genhtml_legend=1 00:15:56.455 --rc geninfo_all_blocks=1 00:15:56.455 --rc geninfo_unexecuted_blocks=1 00:15:56.455 00:15:56.455 ' 00:15:56.455 13:35:08 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:56.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.455 --rc genhtml_branch_coverage=1 00:15:56.455 --rc genhtml_function_coverage=1 00:15:56.455 --rc genhtml_legend=1 00:15:56.455 --rc geninfo_all_blocks=1 00:15:56.455 --rc geninfo_unexecuted_blocks=1 00:15:56.455 00:15:56.455 ' 00:15:56.455 13:35:08 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:56.455 13:35:08 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:56.455 13:35:08 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:15:56.455 13:35:08 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:56.455 13:35:08 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.455 13:35:08 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.455 13:35:08 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.455 13:35:08 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.455 13:35:08 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.455 13:35:08 nvme_scc -- paths/export.sh@5 -- # export PATH 00:15:56.455 13:35:08 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
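The xtrace above shows nvme_scc.sh choosing lcov-version-dependent coverage options: scripts/common.sh tokenizes both version strings on '.', '-' and ':' and compares them field by field, so "lt 1.15 2" asks whether the installed lcov predates 2.x before exporting the lcov_-prefixed --rc names. A simplified, self-contained reconstruction of that comparison; the real helper routes each field through its decimal() normalizer, and the fall-back value for absent or non-numeric fields below is our assumption:

lt() {    # is version $1 older than version $2?
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # Simplification: absent or non-numeric fields count as 1,
        # which is what common.sh's decimal() appears to default to.
        local d1=${ver1[v]:-1} d2=${ver2[v]:-1}
        [[ $d1 =~ ^[0-9]+$ ]] || d1=1
        [[ $d2 =~ ^[0-9]+$ ]] || d2=1
        (( d1 < d2 )) && return 0
        (( d1 > d2 )) && return 1
    done
    return 1    # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov < 2: keep the lcov_-prefixed --rc option names"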
00:15:56.455 13:35:08 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:15:56.455 13:35:08 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:15:56.455 13:35:08 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:15:56.455 13:35:08 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:15:56.455 13:35:08 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:15:56.455 13:35:08 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:15:56.455 13:35:08 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:15:56.455 13:35:08 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:15:56.455 13:35:08 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:15:56.455 13:35:08 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:56.455 13:35:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:15:56.455 13:35:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:15:56.455 13:35:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:15:56.455 13:35:08 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:57.023 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:57.023 Waiting for block devices as requested 00:15:57.281 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:57.281 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:57.281 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:57.537 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:02.858 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:02.858 13:35:14 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:16:02.858 13:35:14 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:02.858 13:35:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:02.858 13:35:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:02.858 13:35:14 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:02.858 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.859 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:16:02.860 13:35:14 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.860 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:02.861 13:35:14 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:16:02.861 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:16:02.862 
13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
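The trace above repeats one pattern per field: functions.sh@16 runs /usr/local/src/nvme-cli/nvme against the device, and the loop at @21-23 splits each output line on the first ':' via IFS, skips empty values with the [[ -n ... ]] guard at @22, and evals the pair into a global associative array (ng0n1[nsze]=0x140000, ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)', and so on). A minimal sketch of that loop, assuming nvme-cli is on PATH and simplifying the real nvme_get's key/value trimming:

    nvme_get_sketch() {
        # $1 = array name (e.g. ng0n1), $2 = nvme-cli subcommand, $3 = device
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                      # same trick as functions.sh@20
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # the [[ -n ... ]] guards at @22
            reg=${reg//[[:space:]]/}             # keys arrive padded, e.g. 'nsze    '
            eval "${ref}[${reg}]=\"\${val# }\""  # e.g. ng0n1[nsze]=0x140000
        done < <(nvme "$cmd" "$dev")
    }

Called as nvme_get_sketch ng0n1 id-ns /dev/ng0n1, this leaves the same kind of map the trace builds, so ${ng0n1[nsze]} reads back 0x140000. Because read is given two variables, internal colons stay in val, which is how multi-part values like the lbaf descriptors survive the split.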
00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:16:02.862 13:35:14 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:02.862 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:16:02.863 13:35:14 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.863 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:16:02.864 13:35:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:16:02.864 13:35:14 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:16:02.865 13:35:14 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:02.865 13:35:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:16:02.865 13:35:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:02.865 13:35:14 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:16:02.865 13:35:14 
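At this point the per-device parses feed a small registry: functions.sh@54 walks the controller's sysfs directory with an extglob that matches both the generic character node (ng0n1) and the block node (nvme0n1), @58 indexes each by namespace number, and @60-63 record the controller handle, the name of its namespace map, and its PCI address (0000:00:11.0 for nvme0). A rough sketch of that bookkeeping, with the loop body and setup glue assumed around the exact expansions the trace shows:

    shopt -s extglob nullglob                 # extglob is required for @(...)
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    declare -A nvme0_ns=()
    ctrl=/sys/class/nvme/nvme0
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54
        ns=${ns##*/}                          # ng0n1, then nvme0n1
        nvme0_ns[${ns##*n}]=$ns               # @58: key '1' = namespace number
    done
    ctrls[nvme0]=nvme0                        # @60
    nvmes[nvme0]=nvme0_ns                     # @61: name of the per-ctrl ns map
    bdfs[nvme0]=0000:00:11.0                  # @62
    ordered_ctrls[0]=nvme0                    # @63: ${ctrl_dev/nvme/} as the slot

Note that ng0n1 and nvme0n1 share namespace number 1, so the second assignment overwrites the first in the map, exactly as the two @58 lines in the trace do.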
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.865 
13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:16:02.865 
13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.865 13:35:14 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:16:02.865 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.866 13:35:14 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val
00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1 id-ctrl fields, continued (per field: IFS=:, read -r reg val, [[ -n $val ]], eval 'nvme1[reg]="val"'):
00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1: hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:16:02.866 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0
00:16:02.867 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1: ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:16:02.867 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:16:02.867 13:35:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:16:02.867 13:35:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:02.867 13:35:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:16:02.868 13:35:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:16:02.868 13:35:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:16:02.868 13:35:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
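The trace above is one loop: nvme_get pipes nvme-cli's "reg : val" output through IFS=: and read -r reg val, then evals each pair into a global associative array (nvme1, then ng1n1, and so on). A minimal sketch of that pattern follows, assuming simplified whitespace trimming and passing the whole command as arguments; it is modeled on the trace, not a copy of nvme/functions.sh.

    #!/usr/bin/env bash
    # Sketch of the parse loop seen in this trace: split "field : value"
    # lines on the first ':' and store them in a global associative array
    # named by the caller. Simplified vs. SPDK's nvme/functions.sh.
    shopt -s extglob

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                   # e.g. declare -gA nvme1=()
        while IFS=: read -r reg val; do       # val keeps later ':' (cf. ps0)
            reg=${reg//+([[:space:]])/}       # keys are single words
            val=${val##+([[:space:]])}        # trim leading spaces only;
                                              # trailing padding stays
            [[ -n $reg && -n $val ]] || continue  # skip headers/blank lines
            eval "${ref}[$reg]=\"$val\""      # -> nvme1[sqes]="0x66", ...
        done < <("$@" 2>/dev/null)
    }

    # Usage with the paths from this log (values contain no quotes here,
    # so the eval is safe for this input):
    # nvme_get nvme1 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
    # echo "${nvme1[subnqn]}"   # nqn.2019-08.org.qemu:12340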
00:16:02.868 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1 id-ns fields: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:16:02.868 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:16:02.869 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1: nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:16:02.869 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:16:02.869 13:35:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:16:02.869 13:35:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:02.869 13:35:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:16:02.869 13:35:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:16:02.869 13:35:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
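The @54 extglob pattern is why the same namespace shows up twice: once as the generic char device ng1n1 and once as the block device nvme1n1. A standalone sketch of that enumeration, assuming the sysfs paths visible in this log; enumerate_ns is an illustrative name, the glob and the ${ns##*n} indexing are the ones in the trace.

    #!/usr/bin/env bash
    # For ctrl=/sys/class/nvme/nvme1 the pattern expands to
    # @("ng1"|"nvme1n")* and matches both ng1n1 and nvme1n1.
    shopt -s extglob

    declare -A nvme1_ns=()

    enumerate_ns() {
        local ctrl=$1 ns
        local -n _ctrl_ns="${ctrl##*/}_ns"    # nameref -> nvme1_ns
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] || continue          # the @55 guard from the trace
            # ${ns##*n} keeps what follows the last 'n': the ns id (1).
            # ng1n1 sorts first, then nvme1n1 overwrites the same slot,
            # matching the order visible in this log.
            _ctrl_ns[${ns##*n}]=${ns##*/}
        done
    }

    enumerate_ns /sys/class/nvme/nvme1
    echo "${nvme1_ns[1]}"   # nvme1n1 (on a host with these devices)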
00:16:02.869 13:35:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:16:02.870 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1 id-ns fields (same namespace re-read via the block device; values match ng1n1 above):
00:16:02.870 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:16:02.870 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1: noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:16:02.871 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1: lbaf0-lbaf7 as for ng1n1, lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:16:02.871 13:35:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
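Everything stored this way is a plain string, so later capability checks reduce to arithmetic bit tests on values such as nvme1's oncs=0x15d and vwc=0x7 above. A hedged example of such a consumer: has_bit is an illustrative helper, and the bit meanings quoted are the NVMe-spec ONCS/VWC definitions, not something this log asserts.

    #!/usr/bin/env bash
    # Example consumer for the hex fields collected above. The values are
    # copied from this trace (nvme1); has_bit is not part of SPDK.
    declare -A nvme1=([oncs]=0x15d [vwc]=0x7)

    has_bit() { (( ($1 >> $2) & 1 )); }   # bash arithmetic accepts 0x literals

    # ONCS bit 2 = Dataset Management in the NVMe spec; 0x15d has it set.
    has_bit "${nvme1[oncs]}" 2 && echo "nvme1: DSM (deallocate) supported"
    # VWC bit 0 = volatile write cache present; 0x7 has it set.
    has_bit "${nvme1[vwc]}" 0 && echo "nvme1: volatile write cache present"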
00:16:02.871 13:35:14 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:16:02.871 13:35:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:16:02.871 13:35:14 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:16:02.871 13:35:14 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:16:02.871 13:35:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:16:02.871 13:35:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:16:02.871 13:35:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:16:02.871 13:35:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:16:02.871 13:35:14 nvme_scc -- scripts/common.sh@18 -- # local i
00:16:02.871 13:35:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:16:02.871 13:35:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:16:02.871 13:35:14 nvme_scc -- scripts/common.sh@27 -- # return 0
00:16:02.871 13:35:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:16:02.871 13:35:14 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:16:02.871 13:35:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:16:02.871 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl '
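With a controller's register array and namespace array filled, @60-@63 register them in the global ctrls/nvmes/bdfs/ordered_ctrls tables, keyed by controller name, before the @47 loop advances to the next device. A small sketch of that bookkeeping with the nvme1 values from this log; the nameref lookup at the end is illustrative, only the map names come from the trace.

    #!/usr/bin/env bash
    # Sketch of the @60-@63 registration step and an indirect lookup.
    declare -A nvme1=([nn]=256 [subnqn]="nqn.2019-08.org.qemu:12340")
    declare -A nvme1_ns=([1]=nvme1n1)

    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()

    ctrl_dev=nvme1
    ctrls["$ctrl_dev"]=nvme1                 # name of the id-ctrl array
    nvmes["$ctrl_dev"]=nvme1_ns              # name of the per-ns array
    bdfs["$ctrl_dev"]=0000:00:10.0           # PCI address, as at @62
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme1   # index 1 = controller number

    # Reach everything back through the registered names:
    declare -n regs=${ctrls[$ctrl_dev]} ns_map=${nvmes[$ctrl_dev]}
    echo "$ctrl_dev @ ${bdfs[$ctrl_dev]}: nn=${regs[nn]} ns1=${ns_map[1]}"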
00:16:02.871 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2: fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:16:02.872 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2: crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:16:02.872 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2: mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0
00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:16:02.873 13:35:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:16:02.873 
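This run is the nvme_scc (Simple Copy) suite, and the word that matters most in the dump above is oncs=0x15d: in the NVMe spec, ONCS bit 8 advertises the Copy command, and 0x15d has that bit set. A minimal sketch of how a test could gate on it from the parsed array (supports_scc is a hypothetical helper name, not something functions.sh is known to define):

    # ONCS bit 8 = Copy command support (NVMe 2.0 Simple Copy).
    # 0x15d & 0x100 is non-zero, so the check passes for this controller.
    supports_scc() {
        local -n ctrl=$1                  # nameref to the parsed assoc array
        (( ctrl[oncs] & (1 << 8) ))
    }
    supports_scc nvme2 && echo "nvme2 advertises Simple Copy"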
00:16:02.873 13:35:14 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2[] (continued):
00:16:02.873 13:35:14 nvme_scc --     icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342
00:16:02.874 13:35:14 nvme_scc --     ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:16:02.874 13:35:14 nvme_scc -- nvme/functions.sh -- # id-ctrl parse for /dev/nvme2 complete
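The trace above is nvme_get walking the `nvme id-ctrl /dev/nvme2` output one "field : value" line at a time and eval'ing each pair into a global associative array named after the device. A minimal sketch of that pattern, reconstructed from the trace rather than copied from nvme/functions.sh (the exact whitespace trimming and nvme-cli invocation are assumptions):

    # Parse "field : value" lines from nvme-cli into a global assoc array.
    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                    # e.g. declares global nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}           # "ctratt   " -> "ctratt"
            val=${val# }                       # " 0x8000"  -> "0x8000"
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\"\${val}\""  # nvme2[ctratt]="0x8000"
        done < <(nvme "$cmd" "$dev")
    }

    nvme_get nvme2 id-ctrl /dev/nvme2          # then: echo "${nvme2[subnqn]}"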
00:16:02.874 13:35:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:16:02.874 13:35:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
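That extglob loop is how the script discovers the controller's namespaces: for ctrl=/sys/class/nvme/nvme2 the pattern matches both character-device entries ("ng2n1") and block-device entries ("nvme2n1"), and each hit is parsed and registered in the _ctrl_ns map keyed by namespace id. A hedged reconstruction of the walk (walk_ctrl_namespaces is a hypothetical wrapper name; the caller-declared nvme2_ns array is an assumption):

    shopt -s extglob nullglob                  # extglob must be on before the
                                               # function body is parsed
    walk_ctrl_namespaces() {
        local ctrl=$1 ns ns_dev
        local -n _ctrl_ns="${ctrl##*/}_ns"     # nameref to e.g. nvme2_ns
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}                   # ng2n1, ng2n2, ...
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev        # nsid 1 -> ng2n1, 2 -> ng2n2, ...
        done
    }

    declare -a nvme2_ns=()
    walk_ctrl_namespaces /sys/class/nvme/nvme2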
00:16:02.874 13:35:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:16:02.874 13:35:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:16:02.874 13:35:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:16:02.874 13:35:14 nvme_scc -- nvme/functions.sh@16-23 -- # id-ns parse populated ng2n1[]:
00:16:02.874 13:35:14 nvme_scc --     nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:16:02.875 13:35:14 nvme_scc --     nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:16:02.875 13:35:14 nvme_scc --     mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:16:02.875 13:35:14 nvme_scc --     lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
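The lbaf0-lbaf7 descriptors plus flbas above tell you the namespace geometry: the low nibble of flbas indexes the lbafN descriptors, and lbads is the log2 of the data block size, so flbas=0x4 selects lbaf4 and 4096-byte blocks here (matching the "(in use)" marker). A sketch of decoding that from the parsed array (lba_block_size is a hypothetical helper, not part of functions.sh):

    # Report the in-use data block size for a parsed id-ns array.
    lba_block_size() {
        local -n ns=$1
        local fmt=$(( ns[flbas] & 0xf ))       # flbas=0x4 -> descriptor lbaf4
        local desc=${ns[lbaf$fmt]}             # "ms:0 lbads:12 rp:0 (in use)"
        local lbads=${desc#*lbads:}
        lbads=${lbads%% *}                     # -> "12"
        echo $(( 1 << lbads ))                 # 2^12 = 4096-byte data blocks
    }

    lba_block_size ng2n1                       # prints 4096 for this namespace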
00:16:02.876 13:35:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:16:02.876 13:35:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:16:02.876 13:35:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:16:02.876 13:35:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:16:02.876 13:35:14 nvme_scc -- nvme/functions.sh@16-23 -- # id-ns parse populated ng2n2[] with the same values as ng2n1:
00:16:02.876 13:35:14 nvme_scc --     nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:16:02.877 13:35:14 nvme_scc --     nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:16:02.877 13:35:14 nvme_scc --     mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:16:02.877 13:35:14 nvme_scc --     lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
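The mssrl/mcl/msrc triple captured for each namespace is the Simple Copy limit set: mssrl caps a single source range, mcl caps the total length of one Copy command, and msrc is the 0-based source-range count, so 127 means up to 128 ranges. A hedged sketch of validating a copy shape against those limits (copy_fits is a hypothetical name; all lengths are in logical blocks per my reading of the spec):

    # Check a Simple Copy request shape against the id-ns limits parsed above.
    copy_fits() {
        local -n ns=$1
        local nranges=$2 range_len=$3
        (( nranges <= ns[msrc] + 1 )) &&       # <= 128 source ranges (0-based)
        (( range_len <= ns[mssrl] )) &&        # <= 128 LBAs per source range
        (( nranges * range_len <= ns[mcl] ))   # <= 128 LBAs total per command
    }

    copy_fits ng2n2 2 64 && echo "2 ranges x 64 LBAs is within limits"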
00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@16-23 -- # id-ns parse populating ng2n3[]:
00:16:03.141 13:35:14 nvme_scc --     nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.141 13:35:14 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.141 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.142 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:03.143 13:35:14 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:16:03.143 13:35:14 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:16:03.143 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:16:03.144 13:35:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:16:03.144 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.145 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:16:03.146 
13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:16:03.146 13:35:14 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.146 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:03.147 13:35:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:16:03.147 13:35:14 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:16:03.147 13:35:14 nvme_scc -- scripts/common.sh@18 -- # local i 00:16:03.147 13:35:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:16:03.147 13:35:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:03.147 13:35:14 nvme_scc -- scripts/common.sh@27 -- # return 0 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.147 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:16:03.148 13:35:14 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:16:03.148 13:35:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.148 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 
13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:16:03.149 13:35:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 
13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.149 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:16:03.150 
13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.150 13:35:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:16:03.151 13:35:14 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:16:03.151 13:35:14 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
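The xtrace around this point is functions.sh picking out controllers that advertise the Simple Copy command: for each controller, get_ctrls_with_feature reads the cached ONCS word (0x15d on every controller in this run) and tests bit 8, which the NVMe spec assigns to Copy support. The following is a minimal standalone sketch of that check, not the verbatim ctrl_has_scc helper; it assumes nvme-cli is installed and root access, and the device path and function name are illustrative only:

ctrl_supports_scc() {
  local dev=$1 oncs
  # Pull the "oncs : 0x15d" field out of `nvme id-ctrl` output.
  oncs=$(nvme id-ctrl "$dev" | awk -F: '/^oncs/ {gsub(/[[:space:]]/, "", $2); print $2}')
  # Bit 8 of ONCS advertises Simple Copy: 0x15d & 0x100 is nonzero,
  # matching the `(( oncs & 1 << 8 ))` step traced in this log.
  (( oncs & 1 << 8 ))
}

ctrl_supports_scc /dev/nvme1 && echo "Simple Copy supported"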
00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:16:03.151 13:35:14 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:16:03.151 13:35:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:16:03.151 13:35:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:16:03.151 13:35:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:16:03.151 13:35:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:16:03.151 13:35:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:16:03.151 13:35:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:16:03.151 13:35:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:16:03.151 13:35:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:16:03.151 13:35:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:16:03.151 13:35:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:16:03.151 13:35:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:16:03.151 13:35:15 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:16:03.151 13:35:15 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:16:03.151 13:35:15 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:16:03.151 13:35:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:16:03.151 13:35:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:16:03.151 13:35:15 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:16:03.720 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:16:04.656 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:16:04.656 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:16:04.656 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:16:04.656 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:16:04.656 13:35:16 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:16:04.656 13:35:16 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:16:04.656 13:35:16 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:04.656 13:35:16 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:16:04.656 ************************************
00:16:04.656 START TEST nvme_simple_copy
00:16:04.656 ************************************
00:16:04.656 13:35:16 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:16:04.914 Initializing NVMe Controllers
00:16:04.914 Attaching to 0000:00:10.0
00:16:04.914 Controller supports SCC. Attached to 0000:00:10.0
00:16:04.914 Namespace ID: 1 size: 6GB
00:16:04.914 Initialization complete.
00:16:04.914
00:16:04.914 Controller QEMU NVMe Ctrl (12340 )
00:16:04.914 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:16:04.914 Namespace Block Size:4096
00:16:04.914 Writing LBAs 0 to 63 with Random Data
00:16:04.914 Copied LBAs from 0 - 63 to the Destination LBA 256
00:16:05.173 LBAs matching Written Data: 64
00:16:05.173
00:16:05.173 real 0m0.318s
00:16:05.173 user 0m0.106s
00:16:05.173 sys 0m0.110s
00:16:05.173 13:35:16 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:05.173 13:35:16 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:16:05.173 ************************************
00:16:05.173 END TEST nvme_simple_copy
00:16:05.173 ************************************
00:16:05.173
00:16:05.173 real 0m8.899s
00:16:05.173 user 0m1.575s
00:16:05.173 sys 0m2.335s
00:16:05.173 13:35:16 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:05.173 ************************************
00:16:05.173 END TEST nvme_scc
00:16:05.173 13:35:16 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:16:05.173 ************************************
00:16:05.173 13:35:16 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:16:05.173 13:35:16 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:16:05.173 13:35:16 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:16:05.173 13:35:16 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:16:05.173 13:35:16 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:16:05.173 13:35:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:05.173 13:35:16 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:05.173 13:35:16 -- common/autotest_common.sh@10 -- # set +x
00:16:05.173 ************************************
00:16:05.173 START TEST nvme_fdp
00:16:05.173 ************************************
00:16:05.173 13:35:17 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:16:05.173 * Looking for test storage...
00:16:05.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:16:05.173 13:35:17 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:16:05.173 13:35:17 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:16:05.173 13:35:17 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version
00:16:05.431 13:35:17 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:16:05.431 13:35:17 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:05.431 13:35:17 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:05.431 13:35:17 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:05.431 13:35:17 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:16:05.431 13:35:17 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:16:05.431 13:35:17 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:16:05.431 13:35:17 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:16:05.431 13:35:17 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:16:05.431 13:35:17 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:16:05.431 13:35:17 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:16:05.431 13:35:17 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:05.431 13:35:17 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:16:05.431 13:35:17 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:16:05.431 13:35:17 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:05.431 13:35:17 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:16:05.432 13:35:17 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.432 13:35:17 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:05.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.432 --rc genhtml_branch_coverage=1 00:16:05.432 --rc genhtml_function_coverage=1 00:16:05.432 --rc genhtml_legend=1 00:16:05.432 --rc geninfo_all_blocks=1 00:16:05.432 --rc geninfo_unexecuted_blocks=1 00:16:05.432 00:16:05.432 ' 00:16:05.432 13:35:17 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:05.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.432 --rc genhtml_branch_coverage=1 00:16:05.432 --rc genhtml_function_coverage=1 00:16:05.432 --rc genhtml_legend=1 00:16:05.432 --rc geninfo_all_blocks=1 00:16:05.432 --rc geninfo_unexecuted_blocks=1 00:16:05.432 00:16:05.432 ' 00:16:05.432 13:35:17 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:05.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.432 --rc genhtml_branch_coverage=1 00:16:05.432 --rc genhtml_function_coverage=1 00:16:05.432 --rc genhtml_legend=1 00:16:05.432 --rc geninfo_all_blocks=1 00:16:05.432 --rc geninfo_unexecuted_blocks=1 00:16:05.432 00:16:05.432 ' 00:16:05.432 13:35:17 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:05.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.432 --rc genhtml_branch_coverage=1 00:16:05.432 --rc genhtml_function_coverage=1 00:16:05.432 --rc genhtml_legend=1 00:16:05.432 --rc geninfo_all_blocks=1 00:16:05.432 --rc geninfo_unexecuted_blocks=1 00:16:05.432 00:16:05.432 ' 00:16:05.432 13:35:17 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:05.432 13:35:17 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:05.432 13:35:17 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:16:05.432 13:35:17 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:05.432 13:35:17 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.432 13:35:17 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.432 13:35:17 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.432 13:35:17 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.432 13:35:17 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.432 13:35:17 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:16:05.432 13:35:17 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.432 13:35:17 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:16:05.432 13:35:17 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:16:05.432 13:35:17 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:16:05.432 13:35:17 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:16:05.432 13:35:17 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:16:05.432 13:35:17 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:16:05.432 13:35:17 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:16:05.432 13:35:17 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:16:05.432 13:35:17 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:16:05.432 13:35:17 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:05.432 13:35:17 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:06.000 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:06.260 Waiting for block devices as requested 00:16:06.260 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:06.519 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:06.519 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:06.778 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:12.108 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:12.108 13:35:23 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:16:12.108 13:35:23 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:16:12.108 13:35:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:12.108 13:35:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:12.108 13:35:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:12.108 13:35:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.108 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:16:12.109 13:35:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:16:12.109 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:12.110 13:35:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:16:12.110 13:35:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.110 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 
13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:16:12.111 13:35:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:16:12.111 13:35:23 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:16:12.111 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:16:12.112 13:35:23 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:16:12.112 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:16:12.113 13:35:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
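
[Editor's aside] The run of trace records above is scan_nvme_ctrls (test/common/nvme/functions.sh) enumerating /sys/class/nvme: for each controller and namespace it runs /usr/local/src/nvme-cli/nvme id-ctrl or id-ns and folds every "register : value" line of the output into a Bash associative array (nvme0, ng0n1, nvme0n1, ...). Below is a minimal standalone reduction of that parsing loop, assuming plain nvme-cli text output of the form "reg : value"; the real nvme_get helper additionally uses shift/eval so one loop can populate an array whose name is passed in by reference, which is why the trace shows eval 'nvme0[vid]="0x1b36"' and friends.

    # Hypothetical reduction of the nvme_get pattern traced above (not the
    # actual functions.sh helper): parse `nvme id-ctrl` into an assoc array.
    declare -A nvme0=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}    # register name, e.g. vid, sn, mdts
        val=${val# }                # drop the single space after the colon
        [[ -n $val ]] || continue   # keep only "reg : value" lines
        nvme0[$reg]=$val            # e.g. nvme0[vid]=0x1b36, nvme0[sn]='12341   '
    done < <(nvme id-ctrl /dev/nvme0)

The copy-related namespace fields captured this way for ng0n1 a few records back (mssrl=128, mcl=128, msrc=127) are the per-namespace limits that bound the Simple Copy operation summarized earlier in this log (64 LBAs copied to destination LBA 256).
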
00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:16:12.113 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:16:12.114 13:35:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:12.114 13:35:23 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.114 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:16:12.115 13:35:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:12.115 13:35:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:16:12.115 13:35:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:12.115 13:35:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:16:12.115 13:35:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.115 13:35:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:12.115 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
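[At this point the nvme1 id-ctrl fields are captured as plain strings, so later capability checks reduce to shell arithmetic on the stored hex. As a hedged illustration of decoding the oacs=0x12a recorded just above (bit positions per the NVMe base specification's OACS definition, not something the trace itself asserts):

    # Decode the OACS bitmask captured from id-ctrl (0x12a here).
    oacs=0x12a
    (( oacs & (1 << 1) )) && echo "Format NVM supported"
    (( oacs & (1 << 3) )) && echo "Namespace Management supported"
    (( oacs & (1 << 5) )) && echo "Directives supported"
    (( oacs & (1 << 8) )) && echo "Doorbell Buffer Config supported"
]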
00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
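[The wctemp/cctemp values captured a few entries back are kelvin-encoded per the id-ctrl field definitions, so the stored 343 and 373 correspond to 70 C and 100 C thresholds; a quick conversion, assuming that encoding:

    # id-ctrl temperature thresholds are reported in kelvin.
    for t in 343 373; do echo "$t K = $(( t - 273 )) C"; done
]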
00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:16:12.116 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:16:12.117 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:16:12.118 13:35:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
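[For ng1n1 the loop has just stored nlbaf=7 (eight formats, zero-based) and flbas=0x7. Per the NVMe base specification, the low four bits of FLBAS select the in-use LBA format and bit 4 selects extended (inline) metadata; a sketch of that decode, assuming the spec layout:

    # Decode FLBAS as captured for ng1n1 (0x7): bits 3:0 = in-use lbaf
    # index, bit 4 = metadata transferred at the end of the data LBA.
    flbas=0x7
    echo "in-use lbaf index: $(( flbas & 0xf ))"       # -> 7
    echo "extended metadata: $(( (flbas >> 4) & 1 ))"  # -> 0
]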
00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.118 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:16:12.119 13:35:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:12.119 13:35:23 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:16:12.119 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:12.120 13:35:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:16:12.120 13:35:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:16:12.120 13:35:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:16:12.120 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:16:12.121 13:35:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:16:12.121 13:35:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:16:12.121 13:35:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:12.121 13:35:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:16:12.121 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
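The functions.sh@47-63 frames visible in this trace (controller enumeration, the pci_can_use filter, per-namespace nvme_get calls, and the ctrls/nvmes/bdfs/ordered_ctrls bookkeeping) amount to a loop of roughly this shape; a sketch, with the assumption that the PCI address is derived from the controller's sysfs device link:

  shopt -s extglob                 # needed for the @(...) namespace glob below
  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  for ctrl in /sys/class/nvme/nvme*; do
          pci=$(basename "$(readlink -f "$ctrl/device")")  # assumption: BDF via sysfs
          pci_can_use "$pci" || continue   # scripts/common.sh allow/block filter
          ctrl_dev=${ctrl##*/}
          nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
          unset -n _ctrl_ns
          declare -n _ctrl_ns=${ctrl_dev}_ns
          # @54: match both generic char nodes (ng2n1) and block nodes (nvme2n1)
          for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
                  [[ -e $ns ]] || continue
                  ns_dev=${ns##*/}
                  nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                  _ctrl_ns[${ns_dev##*n}]=$ns_dev
          done
          ctrls["$ctrl_dev"]=$ctrl_dev
          nvmes["$ctrl_dev"]=${ctrl_dev}_ns
          bdfs["$ctrl_dev"]=$pci
          ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
  done

This is why the same id-ns dump repeats per device node: each controller is snapshotted once via id-ctrl, then every ng*/nvme*n* namespace under it is snapshotted via id-ns into its own array.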
00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:16:12.122 13:35:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:16:12.122 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:16:12.123 13:35:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.123 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
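Because nvme_get declares each array with -g, the snapshots stay visible after parsing finishes; a hypothetical spot-check (not part of the test flow) against the nvme2 values captured above:

  echo "nvme2: mdts=${nvme2[mdts]} oncs=${nvme2[oncs]} subnqn=${nvme2[subnqn]}"
  # -> nvme2: mdts=7 oncs=0x15d subnqn=nqn.2019-08.org.qemu:12342

Later test stages can therefore query controller and namespace capabilities directly from these arrays instead of re-running nvme-cli.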
00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # 
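The repeated trace records above all come from a single parsing loop: nvme-cli prints one "field : value" pair per line, and functions.sh folds each pair into a global associative array named after the device. A minimal sketch of that pattern (a simplified reconstruction, not the SPDK nvme/functions.sh source; the helper name nvme_get_sketch is hypothetical):

  #!/usr/bin/env bash
  # Split each "field : value" output line on ':' and store the pair in a
  # global associative array whose name is passed as the first argument.
  nvme_get_sketch() {
    local ref=$1 reg val
    shift                                 # remaining args form the command
    local -gA "$ref=()"                   # e.g. declares global array ng2n1
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}                      # trim the field name
      val=${val#"${val%%[![:space:]]*}"}            # trim leading blanks
      [[ -n $val ]] && eval "${ref}[\$reg]=\$val"   # e.g. nvme2[vwc]=0x7
    done < <("$@")
  }
  # Hypothetical usage matching the trace:
  #   nvme_get_sketch ng2n1 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
  #   echo "${ng2n1[nsze]}"   # -> 0x100000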
00:16:12.124 13:35:23 nvme_fdp -- nvme/functions.sh@57 (trace condensed) nvme_get ng2n1 id-ns /dev/ng2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:16:12.126 13:35:23 nvme_fdp -- nvme/functions.sh@58 -- _ctrl_ns[1]=ng2n1
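Namespace discovery between the nvme_get calls is driven by the extglob pattern visible in the trace's for-loop header. A sketch of how it expands for this controller (simplified; assumes extglob and nullglob, and keys the _ctrl_ns map by namespace id exactly as the trace's _ctrl_ns[${ns##*n}] assignment does):

  #!/usr/bin/env bash
  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme2
  declare -A _ctrl_ns=()
  # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern below
  # becomes @(ng2|nvme2n)* and matches ng2n1.. as well as nvme2n1..
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns_dev=${ns##*/}               # ng2n1, ng2n2, ng2n3, nvme2n1, ...
    _ctrl_ns[${ns##*n}]=$ns_dev    # key = nsid, e.g. _ctrl_ns[1]=ng2n1
    echo "namespace device: $ns_dev"
  done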
00:16:12.126 13:35:23 nvme_fdp -- nvme/functions.sh@57 (trace condensed) nvme_get ng2n2 id-ns /dev/ng2n2: all fields identical to ng2n1 above (nsze=0x100000 through lbaf7='ms:64 lbads:12 rp:0 ')
00:16:12.127 13:35:23 nvme_fdp -- nvme/functions.sh@58 -- _ctrl_ns[2]=ng2n2
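Every stored field goes through an eval of the form eval 'arr[key]="value"'. A short sketch of why the inner double quotes matter (my reading of the trace, not a comment from the script itself): several values, such as the lbafN descriptors, carry embedded and trailing spaces that an unquoted assignment would lose:

  #!/usr/bin/env bash
  declare -A ng2n2=()
  val='ms:8 lbads:9 rp:0 '              # trailing space, as in the trace
  eval 'ng2n2[lbaf1]="'"$val"'"'        # runs: ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "
  printf -v 'ng2n2[lbaf2]' '%s' "$val"  # eval-free alternative (bash >= 4.1)
  declare -p ng2n2                      # both entries keep the spaces intact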
00:16:12.128 13:35:23 nvme_fdp -- nvme/functions.sh@57 (trace condensed) nvme_get ng2n3 id-ns /dev/ng2n3: all fields identical to ng2n1 above
00:16:12.129 13:35:23 nvme_fdp -- nvme/functions.sh@58 -- _ctrl_ns[3]=ng2n3
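With the per-namespace arrays populated, later checks can derive values from them. For instance, the active logical-block size follows from flbas and the lbafN descriptors recorded above; a sketch using those exact values (the low nibble of FLBAS indexes the LBA formats, and lbads is the log2 block size per the NVMe spec):

  #!/usr/bin/env bash
  # Derive the in-use block size from fields captured in the trace above.
  declare -A ng2n1=(
    [flbas]=0x4
    [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
  )
  fmt=$(( ng2n1[flbas] & 0xf ))      # low nibble selects the format -> 4
  lbaf=${ng2n1[lbaf$fmt]}
  lbads=${lbaf#*lbads:}; lbads=${lbads%% *}
  echo "lbaf$fmt in use: $((1 << lbads))-byte logical blocks"   # 4096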
00:16:12.129 13:35:23 nvme_fdp -- nvme/functions.sh@57 (trace condensed) nvme_get nvme2n1 id-ns /dev/nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 (trace continues)
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:16:12.130 
13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.130 13:35:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:16:12.131 13:35:24 nvme_fdp 
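The trace above is nvme_get from test/nvme/functions.sh: it runs nvme-cli's id-ns against /dev/nvme2n1 and folds every "field : value" line of the output into a global associative array keyed by register name. A minimal, self-contained sketch of that pattern; parse_id_output is a hypothetical stand-in for the real helper, and it assumes nvme-cli is installed:

    #!/usr/bin/env bash
    # Load each "field : value" line emitted by an nvme-cli identify
    # command into a global associative array named by the caller.
    parse_id_output() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}               # "lbaf  0" -> "lbaf0"
            val=${val#"${val%%[![:space:]]*}"}     # strip leading blanks
            [[ -n $reg && -n $val ]] && eval "${ref}[$reg]=\"\$val\""
        done < <("$@" 2>/dev/null)
    }

    # Usage (assumes /dev/nvme2n1 exists):
    #   parse_id_output nvme2n1 nvme id-ns /dev/nvme2n1
    #   echo "${nvme2n1[nsze]}"   # prints 0x100000 on this CI namespace

Splitting on the first colon only is what lets values that themselves contain colons, such as the lbafN descriptors above, survive intact in the last read variable.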
00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:16:12.131 13:35:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:16:12.131 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
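Each pass of the enclosing loop comes from the extglob at functions.sh@54, which matches both the block node (nvme2n1) and the character node (ng2n1) under the controller's sysfs directory and keys _ctrl_ns by the namespace index after the final "n". A sketch of that enumeration, assuming /sys/class/nvme/nvme2 is populated as in this run:

    #!/usr/bin/env bash
    # Walk one controller's namespaces the way functions.sh@54-58 does.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2        # assumed controller path
    declare -A _ctrl_ns

    # Expands to @("ng2"|"nvme2n")*, so ng2nY and nvme2nY both match.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}              # e.g. nvme2n2
        _ctrl_ns[${ns##*n}]=$ns_dev   # index 2 -> nvme2n2
    done
    declare -p _ctrl_ns               # here: [1]=nvme2n1 [2]=nvme2n2 [3]=nvme2n3

Because both spellings of a namespace reduce to the same trailing index, the two sysfs entries collapse onto one _ctrl_ns slot rather than producing duplicates.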
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:16:12.132 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:16:12.133 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:16:12.134 13:35:24 nvme_fdp -- scripts/common.sh@18 -- # local i
00:16:12.134 13:35:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:16:12.134 13:35:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:16:12.134 13:35:24 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:16:12.134 13:35:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
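Having exhausted nvme2's namespaces, the scan records the controller in its bookkeeping arrays and moves to nvme3 at PCI address 0000:00:13.0, which scripts/common.sh's pci_can_use accepts because both filter lists are empty in this run. A simplified sketch of that gate and the bookkeeping, with the allow/block semantics inferred from this trace and the sysfs address resolution an assumption of the sketch:

    #!/usr/bin/env bash
    # Simplified stand-in for pci_can_use in scripts/common.sh; both
    # PCI_BLOCKED and PCI_ALLOWED are empty in this run, so all pass.
    PCI_BLOCKED=${PCI_BLOCKED:-}
    PCI_ALLOWED=${PCI_ALLOWED:-}

    pci_can_use() {
        local pci=$1
        [[ $PCI_BLOCKED == *"$pci"* ]] && return 1  # explicitly blocked
        [[ -z $PCI_ALLOWED ]] && return 0           # no allow-list: accept
        [[ $PCI_ALLOWED == *"$pci"* ]]              # otherwise must be listed
    }

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:13.0
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns     # name of the per-ctrl ns map
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done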
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 '
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl '
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 '
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:16:12.394 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.395 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.396 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
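The wall of eval traces here is nvme/functions.sh caching every identify-controller register for nvme3 into a bash associative array, one "reg : val" pair per line of identify output. A minimal sketch of that pattern, assuming nvme-cli's human-readable id-ctrl format and using a plain associative array in place of SPDK's eval-into-named-array helper (names here are illustrative, not the actual functions.sh internals):

    declare -A ctrl_regs
    # Each id-ctrl line looks like "vid       : 0x1b36"; split on ':' and
    # keep only non-empty values, mirroring the [[ -n $val ]] guard traced above.
    while IFS=: read -r reg val; do
      reg="${reg// /}"                          # strip padding from the register name
      [[ -n $val ]] && ctrl_regs[$reg]="${val# }"
    done < <(nvme id-ctrl /dev/nvme3)
    echo "vid=${ctrl_regs[vid]} mdts=${ctrl_regs[mdts]}"
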
00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.397 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:16:12.398 13:35:24 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:16:12.398 13:35:24 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:16:12.398 13:35:24 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:16:12.398 13:35:24 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:16:12.398 13:35:24 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:12.966 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:13.899 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:13.899 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:13.899 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:13.899 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:13.899 13:35:25 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:16:13.899 13:35:25 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:13.899 13:35:25 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.899 13:35:25 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:16:13.899 ************************************ 00:16:13.899 START TEST nvme_flexible_data_placement 00:16:13.899 ************************************ 00:16:13.899 13:35:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:16:14.158 Initializing NVMe Controllers 00:16:14.158 Attaching to 0000:00:13.0 00:16:14.158 Controller supports FDP Attached to 0000:00:13.0 00:16:14.158 Namespace ID: 1 Endurance Group ID: 1 00:16:14.158 Initialization complete. 
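Controller selection in the trace above comes down to a single bit test: ctrl_has_fdp keeps only controllers whose cached CTRATT value has bit 19 set, the Flexible Data Placement capability bit. Only nvme3 (ctratt=0x88010) passes; the others report 0x8000. The check in isolation, using the values from this run:

    # CTRATT bit 19 == FDP support; the values below are the ones traced above.
    for ctratt in 0x8000 0x8000 0x88010 0x8000; do
      if (( ctratt & 1 << 19 )); then
        echo "ctratt=$ctratt: FDP supported"    # fires only for 0x88010
      fi
    done
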
00:16:14.158 00:16:14.158 ================================== 00:16:14.158 == FDP tests for Namespace: #01 == 00:16:14.158 ================================== 00:16:14.158 00:16:14.158 Get Feature: FDP: 00:16:14.158 ================= 00:16:14.158 Enabled: Yes 00:16:14.158 FDP configuration Index: 0 00:16:14.158 00:16:14.158 FDP configurations log page 00:16:14.158 =========================== 00:16:14.158 Number of FDP configurations: 1 00:16:14.158 Version: 0 00:16:14.158 Size: 112 00:16:14.158 FDP Configuration Descriptor: 0 00:16:14.158 Descriptor Size: 96 00:16:14.158 Reclaim Group Identifier format: 2 00:16:14.158 FDP Volatile Write Cache: Not Present 00:16:14.158 FDP Configuration: Valid 00:16:14.158 Vendor Specific Size: 0 00:16:14.158 Number of Reclaim Groups: 2 00:16:14.158 Number of Reclaim Unit Handles: 8 00:16:14.158 Max Placement Identifiers: 128 00:16:14.158 Number of Namespaces Supported: 256 00:16:14.158 Reclaim Unit Nominal Size: 6000000 bytes 00:16:14.158 Estimated Reclaim Unit Time Limit: Not Reported 00:16:14.158 RUH Desc #000: RUH Type: Initially Isolated 00:16:14.158 RUH Desc #001: RUH Type: Initially Isolated 00:16:14.158 RUH Desc #002: RUH Type: Initially Isolated 00:16:14.158 RUH Desc #003: RUH Type: Initially Isolated 00:16:14.158 RUH Desc #004: RUH Type: Initially Isolated 00:16:14.158 RUH Desc #005: RUH Type: Initially Isolated 00:16:14.158 RUH Desc #006: RUH Type: Initially Isolated 00:16:14.158 RUH Desc #007: RUH Type: Initially Isolated 00:16:14.158 00:16:14.158 FDP reclaim unit handle usage log page 00:16:14.158 ====================================== 00:16:14.158 Number of Reclaim Unit Handles: 8 00:16:14.158 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:16:14.158 RUH Usage Desc #001: RUH Attributes: Unused 00:16:14.158 RUH Usage Desc #002: RUH Attributes: Unused 00:16:14.158 RUH Usage Desc #003: RUH Attributes: Unused 00:16:14.158 RUH Usage Desc #004: RUH Attributes: Unused 00:16:14.158 RUH Usage Desc #005: RUH Attributes: Unused 00:16:14.158 RUH Usage Desc #006: RUH Attributes: Unused 00:16:14.158 RUH Usage Desc #007: RUH Attributes: Unused 00:16:14.158 00:16:14.158 FDP statistics log page 00:16:14.158 ======================= 00:16:14.158 Host bytes with metadata written: 928841728 00:16:14.158 Media bytes with metadata written: 929005568 00:16:14.158 Media bytes erased: 0 00:16:14.158 00:16:14.158 FDP Reclaim unit handle status 00:16:14.158 ============================== 00:16:14.158 Number of RUHS descriptors: 2 00:16:14.158 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004a30 00:16:14.158 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:16:14.158 00:16:14.158 FDP write on placement id: 0 success 00:16:14.158 00:16:14.158 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:16:14.158 00:16:14.158 IO mgmt send: RUH update for Placement ID: #0 Success 00:16:14.158 00:16:14.158 Get Feature: FDP Events for Placement handle: #0 00:16:14.158 ======================== 00:16:14.158 Number of FDP Events: 6 00:16:14.158 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:16:14.158 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:16:14.158 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:16:14.158 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:16:14.158 FDP Event: #4 Type: Media Reallocated Enabled: No 00:16:14.158 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:16:14.158 00:16:14.158 FDP events log page
00:16:14.158 =================== 00:16:14.158 Number of FDP events: 1 00:16:14.158 FDP Event #0: 00:16:14.158 Event Type: RU Not Written to Capacity 00:16:14.158 Placement Identifier: Valid 00:16:14.158 NSID: Valid 00:16:14.158 Location: Valid 00:16:14.158 Placement Identifier: 0 00:16:14.158 Event Timestamp: 8 00:16:14.158 Namespace Identifier: 1 00:16:14.158 Reclaim Group Identifier: 0 00:16:14.158 Reclaim Unit Handle Identifier: 0 00:16:14.158 00:16:14.158 FDP test passed 00:16:14.158 00:16:14.158 real 0m0.284s 00:16:14.158 user 0m0.090s 00:16:14.158 sys 0m0.093s 00:16:14.158 13:35:26 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.158 13:35:26 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:16:14.158 ************************************ 00:16:14.158 END TEST nvme_flexible_data_placement 00:16:14.158 ************************************ 00:16:14.416 00:16:14.416 real 0m9.112s 00:16:14.416 user 0m1.611s 00:16:14.416 sys 0m2.578s 00:16:14.416 13:35:26 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.416 13:35:26 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:16:14.416 ************************************ 00:16:14.416 END TEST nvme_fdp 00:16:14.416 ************************************ 00:16:14.416 13:35:26 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:16:14.416 13:35:26 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:16:14.416 13:35:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:14.416 13:35:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.416 13:35:26 -- common/autotest_common.sh@10 -- # set +x 00:16:14.416 ************************************ 00:16:14.416 START TEST nvme_rpc 00:16:14.416 ************************************ 00:16:14.416 13:35:26 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:16:14.416 * Looking for test storage... 
00:16:14.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:14.416 13:35:26 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:14.416 13:35:26 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:14.416 13:35:26 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:14.675 13:35:26 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:16:14.675 13:35:26 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.676 13:35:26 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:16:14.676 13:35:26 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.676 13:35:26 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:14.676 13:35:26 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:16:14.676 13:35:26 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.676 13:35:26 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:16:14.676 13:35:26 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.676 13:35:26 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.676 13:35:26 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.676 13:35:26 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:14.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.676 --rc genhtml_branch_coverage=1 00:16:14.676 --rc genhtml_function_coverage=1 00:16:14.676 --rc genhtml_legend=1 00:16:14.676 --rc geninfo_all_blocks=1 00:16:14.676 --rc geninfo_unexecuted_blocks=1 00:16:14.676 00:16:14.676 ' 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:14.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.676 --rc genhtml_branch_coverage=1 00:16:14.676 --rc genhtml_function_coverage=1 00:16:14.676 --rc genhtml_legend=1 00:16:14.676 --rc geninfo_all_blocks=1 00:16:14.676 --rc geninfo_unexecuted_blocks=1 00:16:14.676 00:16:14.676 ' 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:16:14.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.676 --rc genhtml_branch_coverage=1 00:16:14.676 --rc genhtml_function_coverage=1 00:16:14.676 --rc genhtml_legend=1 00:16:14.676 --rc geninfo_all_blocks=1 00:16:14.676 --rc geninfo_unexecuted_blocks=1 00:16:14.676 00:16:14.676 ' 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:14.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.676 --rc genhtml_branch_coverage=1 00:16:14.676 --rc genhtml_function_coverage=1 00:16:14.676 --rc genhtml_legend=1 00:16:14.676 --rc geninfo_all_blocks=1 00:16:14.676 --rc geninfo_unexecuted_blocks=1 00:16:14.676 00:16:14.676 ' 00:16:14.676 13:35:26 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:14.676 13:35:26 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:16:14.676 13:35:26 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:16:14.676 13:35:26 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67331 00:16:14.676 13:35:26 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:16:14.676 13:35:26 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:16:14.676 13:35:26 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67331 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67331 ']' 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.676 13:35:26 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.934 [2024-11-20 13:35:26.689693] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
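Before the RPC test proper starts, get_first_nvme_bdf (traced above) derives the target from gen_nvme.sh, which prints an SPDK bdev config in JSON; each entry's traddr is an NVMe PCI address, and the first one wins (0000:00:10.0 on this VM). The pattern, condensed:

    # Sketch of get_first_nvme_bdf as traced above: collect every
    # .config[].params.traddr from gen_nvme.sh's JSON and take the first.
    bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    bdf=${bdfs[0]}                              # -> 0000:00:10.0 in this run
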
00:16:14.934 [2024-11-20 13:35:26.689835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67331 ] 00:16:14.934 [2024-11-20 13:35:26.875958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:15.193 [2024-11-20 13:35:26.998391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.193 [2024-11-20 13:35:26.998425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.131 13:35:27 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.131 13:35:27 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:16.131 13:35:27 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:16:16.389 Nvme0n1 00:16:16.389 13:35:28 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:16:16.389 13:35:28 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:16:16.648 request: 00:16:16.648 { 00:16:16.648 "bdev_name": "Nvme0n1", 00:16:16.648 "filename": "non_existing_file", 00:16:16.648 "method": "bdev_nvme_apply_firmware", 00:16:16.648 "req_id": 1 00:16:16.648 } 00:16:16.648 Got JSON-RPC error response 00:16:16.648 response: 00:16:16.648 { 00:16:16.648 "code": -32603, 00:16:16.648 "message": "open file failed." 00:16:16.648 } 00:16:16.648 13:35:28 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:16:16.648 13:35:28 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:16:16.648 13:35:28 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:16:16.907 13:35:28 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:16.907 13:35:28 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67331 00:16:16.907 13:35:28 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67331 ']' 00:16:16.907 13:35:28 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67331 00:16:16.907 13:35:28 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:16.907 13:35:28 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.907 13:35:28 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67331 00:16:16.907 killing process with pid 67331 00:16:16.907 13:35:28 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.907 13:35:28 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.907 13:35:28 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67331' 00:16:16.907 13:35:28 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67331 00:16:16.907 13:35:28 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67331 00:16:19.434 00:16:19.434 real 0m4.895s 00:16:19.434 user 0m8.891s 00:16:19.434 sys 0m0.821s 00:16:19.434 ************************************ 00:16:19.434 END TEST nvme_rpc 00:16:19.434 ************************************ 00:16:19.434 13:35:31 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:19.434 13:35:31 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.434 13:35:31 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:16:19.434 13:35:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:16:19.434 13:35:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:19.434 13:35:31 -- common/autotest_common.sh@10 -- # set +x 00:16:19.434 ************************************ 00:16:19.434 START TEST nvme_rpc_timeouts 00:16:19.434 ************************************ 00:16:19.434 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:16:19.434 * Looking for test storage... 00:16:19.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:19.434 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:19.434 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:16:19.434 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:19.434 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:16:19.434 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:19.435 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:19.435 13:35:31 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:16:19.435 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.435 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:19.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.435 --rc genhtml_branch_coverage=1 00:16:19.435 --rc genhtml_function_coverage=1 00:16:19.435 --rc genhtml_legend=1 00:16:19.435 --rc geninfo_all_blocks=1 00:16:19.435 --rc geninfo_unexecuted_blocks=1 00:16:19.435 00:16:19.435 ' 00:16:19.435 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:19.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.435 --rc genhtml_branch_coverage=1 00:16:19.435 --rc genhtml_function_coverage=1 00:16:19.435 --rc genhtml_legend=1 00:16:19.435 --rc geninfo_all_blocks=1 00:16:19.435 --rc geninfo_unexecuted_blocks=1 00:16:19.435 00:16:19.435 ' 00:16:19.435 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:19.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.435 --rc genhtml_branch_coverage=1 00:16:19.435 --rc genhtml_function_coverage=1 00:16:19.435 --rc genhtml_legend=1 00:16:19.435 --rc geninfo_all_blocks=1 00:16:19.435 --rc geninfo_unexecuted_blocks=1 00:16:19.435 00:16:19.435 ' 00:16:19.435 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:19.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.435 --rc genhtml_branch_coverage=1 00:16:19.435 --rc genhtml_function_coverage=1 00:16:19.435 --rc genhtml_legend=1 00:16:19.435 --rc geninfo_all_blocks=1 00:16:19.435 --rc geninfo_unexecuted_blocks=1 00:16:19.435 00:16:19.435 ' 00:16:19.435 13:35:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:19.435 13:35:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67407 00:16:19.435 13:35:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67407 00:16:19.435 13:35:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67444 00:16:19.435 13:35:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:16:19.435 13:35:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:16:19.435 13:35:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67444 00:16:19.435 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67444 ']' 00:16:19.435 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.435 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.435 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.435 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.435 13:35:31 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:16:19.693 [2024-11-20 13:35:31.460953] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:16:19.693 [2024-11-20 13:35:31.461146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67444 ] 00:16:19.693 [2024-11-20 13:35:31.644812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:19.952 [2024-11-20 13:35:31.766276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.952 [2024-11-20 13:35:31.766312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.887 13:35:32 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.887 Checking default timeout settings: 00:16:20.887 13:35:32 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:16:20.887 13:35:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:16:20.887 13:35:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:21.144 Making settings changes with rpc: 00:16:21.144 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:16:21.144 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:16:21.402 Check default vs. modified settings: 00:16:21.402 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:16:21.402 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:21.968 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:16:21.968 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:21.968 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67407 00:16:21.968 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67407 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:16:21.969 Setting action_on_timeout is changed as expected. 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67407 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67407 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:16:21.969 Setting timeout_us is changed as expected. 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67407 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67407 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:16:21.969 Setting timeout_admin_us is changed as expected. 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67407 /tmp/settings_modified_67407 00:16:21.969 13:35:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67444 00:16:21.969 13:35:33 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67444 ']' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67444 00:16:21.969 13:35:33 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:16:21.969 13:35:33 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67444 00:16:21.969 13:35:33 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.969 killing process with pid 67444 00:16:21.969 13:35:33 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67444' 00:16:21.969 13:35:33 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67444 00:16:21.969 13:35:33 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67444 00:16:24.525 RPC TIMEOUT SETTING TEST PASSED. 00:16:24.525 13:35:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
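The check phase above is a straight diff of two save_config snapshots: the script captures the target's configuration before and after bdev_nvme_set_options, then asserts that each of the three timeout keys actually changed. A minimal sketch of the same comparison, using the real rpc.py interface but an illustrative check_setting helper in place of the inline grep/awk/sed pipeline traced above:

```bash
#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" save_config > /tmp/settings_default          # snapshot the defaults
"$rpc" bdev_nvme_set_options \
    --timeout-us=12000000 \
    --timeout-admin-us=24000000 \
    --action-on-timeout=abort
"$rpc" save_config > /tmp/settings_modified         # snapshot after the change

check_setting() {   # $1 = key to compare across the two snapshots
    local before after
    before=$(grep "$1" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep  "$1" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [[ "$before" == "$after" ]]; then
        echo "ERROR: $1 did not change (still '$before')" >&2
        return 1
    fi
    echo "Setting $1 is changed as expected."
}

for key in action_on_timeout timeout_us timeout_admin_us; do
    check_setting "$key" || exit 1
done
```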
00:16:24.525 00:16:24.525 real 0m5.192s 00:16:24.525 user 0m9.982s 00:16:24.525 sys 0m0.805s 00:16:24.525 ************************************ 00:16:24.525 END TEST nvme_rpc_timeouts 00:16:24.525 ************************************ 00:16:24.525 13:35:36 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.525 13:35:36 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:16:24.525 13:35:36 -- spdk/autotest.sh@239 -- # uname -s 00:16:24.525 13:35:36 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:16:24.525 13:35:36 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:16:24.525 13:35:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:24.525 13:35:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.525 13:35:36 -- common/autotest_common.sh@10 -- # set +x 00:16:24.525 ************************************ 00:16:24.525 START TEST sw_hotplug 00:16:24.525 ************************************ 00:16:24.525 13:35:36 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:16:24.784 * Looking for test storage... 00:16:24.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:24.784 13:35:36 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:24.784 13:35:36 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:16:24.784 13:35:36 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:24.784 13:35:36 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.784 13:35:36 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:16:24.784 13:35:36 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.785 13:35:36 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:24.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.785 --rc genhtml_branch_coverage=1 00:16:24.785 --rc genhtml_function_coverage=1 00:16:24.785 --rc genhtml_legend=1 00:16:24.785 --rc geninfo_all_blocks=1 00:16:24.785 --rc geninfo_unexecuted_blocks=1 00:16:24.785 00:16:24.785 ' 00:16:24.785 13:35:36 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:24.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.785 --rc genhtml_branch_coverage=1 00:16:24.785 --rc genhtml_function_coverage=1 00:16:24.785 --rc genhtml_legend=1 00:16:24.785 --rc geninfo_all_blocks=1 00:16:24.785 --rc geninfo_unexecuted_blocks=1 00:16:24.785 00:16:24.785 ' 00:16:24.785 13:35:36 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:24.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.785 --rc genhtml_branch_coverage=1 00:16:24.785 --rc genhtml_function_coverage=1 00:16:24.785 --rc genhtml_legend=1 00:16:24.785 --rc geninfo_all_blocks=1 00:16:24.785 --rc geninfo_unexecuted_blocks=1 00:16:24.785 00:16:24.785 ' 00:16:24.785 13:35:36 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:24.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.785 --rc genhtml_branch_coverage=1 00:16:24.785 --rc genhtml_function_coverage=1 00:16:24.785 --rc genhtml_legend=1 00:16:24.785 --rc geninfo_all_blocks=1 00:16:24.785 --rc geninfo_unexecuted_blocks=1 00:16:24.785 00:16:24.785 ' 00:16:24.785 13:35:36 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:25.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:25.610 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:25.610 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:25.610 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:25.610 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:25.610 13:35:37 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:16:25.610 13:35:37 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:16:25.610 13:35:37 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
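The nvmes array is populated by nvme_in_userspace, whose expansion is traced next: it walks PCI functions of class 01 / subclass 08 / prog-if 02 (NVM Express) and keeps those not claimed elsewhere. The lspci pipeline at its core, lifted from the trace that follows (quirks and all — note the awk condition is cc ~ $2, i.e. "0108" matched against the class field as a pattern):

```bash
# Enumerate NVMe PCI functions: class 01 (mass storage), subclass 08 (NVM),
# prog-if 02 (NVM Express), as scripts/common.sh builds the filter below.
lspci -mm -n -D \
    | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'
```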
00:16:25.610 13:35:37 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@233 -- # local class 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:25.610 13:35:37 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:25.611 13:35:37 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@18 -- # local i 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:25.611 13:35:37 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:16:25.870 13:35:37 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:25.870 13:35:37 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:16:25.870 13:35:37 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:16:25.870 13:35:37 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:26.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:26.693 Waiting for block devices as requested 00:16:26.693 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:26.693 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:26.952 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:26.952 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:32.239 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:32.239 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:16:32.239 13:35:43 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:32.807 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:16:32.807 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:32.807 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:16:33.066 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:16:33.326 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:33.326 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:33.586 13:35:45 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:16:33.586 13:35:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:33.586 13:35:45 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:16:33.586 13:35:45 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:16:33.586 13:35:45 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68335 00:16:33.586 13:35:45 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:16:33.586 13:35:45 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:16:33.586 13:35:45 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:33.586 13:35:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:16:33.586 13:35:45 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:16:33.586 13:35:45 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:16:33.586 13:35:45 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:16:33.586 13:35:45 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:16:33.586 13:35:45 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:16:33.586 13:35:45 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:33.586 13:35:45 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:33.586 13:35:45 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:16:33.586 13:35:45 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:33.586 13:35:45 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:33.844 Initializing NVMe Controllers 00:16:33.844 Attaching to 0000:00:10.0 00:16:33.845 Attaching to 0000:00:11.0 00:16:33.845 Attached to 0000:00:11.0 00:16:33.845 Attached to 0000:00:10.0 00:16:33.845 Initialization complete. Starting I/O... 
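From this point the standalone hotplug app (-i 0 -t 0 -n 6 -r 6) owns the controllers and issues I/O while the shell drives three surprise-removal events from sysfs. Each event follows the same shape; a rough sketch of one iteration, assuming the standard sysfs nodes (the echo targets in the sh@58-62 traces below appear to correspond to driver_override, the probe trigger, and clearing the override — this reconstruction is not the script's literal code):

```bash
nvmes=(0000:00:10.0 0000:00:11.0)   # the two BDFs kept by nvme_count=2

# Surprise-remove both functions out from under the running app.
for bdf in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"
done

sleep 6                              # hotplug_wait: let the app see the failure

# Bring the functions back and rebind them to the userspace driver.
echo 1 > /sys/bus/pci/rescan
for bdf in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf"          > /sys/bus/pci/drivers_probe
    echo ''              > "/sys/bus/pci/devices/$bdf/driver_override"
done
```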
00:16:33.845 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:16:33.845 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:16:33.845 00:16:34.781 QEMU NVMe Ctrl (12341 ): 1428 I/Os completed (+1428) 00:16:34.781 QEMU NVMe Ctrl (12340 ): 1431 I/Os completed (+1431) 00:16:34.781 00:16:36.157 QEMU NVMe Ctrl (12341 ): 3360 I/Os completed (+1932) 00:16:36.157 QEMU NVMe Ctrl (12340 ): 3365 I/Os completed (+1934) 00:16:36.157 00:16:37.093 QEMU NVMe Ctrl (12341 ): 5348 I/Os completed (+1988) 00:16:37.093 QEMU NVMe Ctrl (12340 ): 5362 I/Os completed (+1997) 00:16:37.093 00:16:38.039 QEMU NVMe Ctrl (12341 ): 7356 I/Os completed (+2008) 00:16:38.039 QEMU NVMe Ctrl (12340 ): 7379 I/Os completed (+2017) 00:16:38.039 00:16:38.974 QEMU NVMe Ctrl (12341 ): 9192 I/Os completed (+1836) 00:16:38.974 QEMU NVMe Ctrl (12340 ): 9215 I/Os completed (+1836) 00:16:38.974 00:16:39.542 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:39.542 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:39.542 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:39.543 [2024-11-20 13:35:51.488313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:39.543 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:39.543 [2024-11-20 13:35:51.490237] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.543 [2024-11-20 13:35:51.490303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.543 [2024-11-20 13:35:51.490327] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.543 [2024-11-20 13:35:51.490350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.543 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:39.543 [2024-11-20 13:35:51.493046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.543 [2024-11-20 13:35:51.493101] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.543 [2024-11-20 13:35:51.493121] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.543 [2024-11-20 13:35:51.493141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.802 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:39.802 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:39.802 [2024-11-20 13:35:51.527384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:39.802 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:39.802 [2024-11-20 13:35:51.529060] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.802 [2024-11-20 13:35:51.529110] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.802 [2024-11-20 13:35:51.529138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.802 [2024-11-20 13:35:51.529161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.802 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:39.802 [2024-11-20 13:35:51.531832] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.802 [2024-11-20 13:35:51.531875] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.802 [2024-11-20 13:35:51.531898] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.802 [2024-11-20 13:35:51.531916] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.802 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:39.802 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:39.802 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:16:39.802 EAL: Scan for (pci) bus failed. 00:16:39.802 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:39.802 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:39.802 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:39.802 00:16:39.802 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:40.060 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:40.060 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:40.060 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:40.060 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:40.060 Attaching to 0000:00:10.0 00:16:40.060 Attached to 0000:00:10.0 00:16:40.060 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:40.060 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:40.060 13:35:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:40.060 Attaching to 0000:00:11.0 00:16:40.060 Attached to 0000:00:11.0 00:16:40.997 QEMU NVMe Ctrl (12340 ): 1721 I/Os completed (+1721) 00:16:40.997 QEMU NVMe Ctrl (12341 ): 1631 I/Os completed (+1631) 00:16:40.997 00:16:41.931 QEMU NVMe Ctrl (12340 ): 3673 I/Os completed (+1952) 00:16:41.931 QEMU NVMe Ctrl (12341 ): 3585 I/Os completed (+1954) 00:16:41.931 00:16:42.867 QEMU NVMe Ctrl (12340 ): 5497 I/Os completed (+1824) 00:16:42.867 QEMU NVMe Ctrl (12341 ): 5545 I/Os completed (+1960) 00:16:42.867 00:16:43.804 QEMU NVMe Ctrl (12340 ): 7214 I/Os completed (+1717) 00:16:43.804 QEMU NVMe Ctrl (12341 ): 7281 I/Os completed (+1736) 00:16:43.804 00:16:45.187 QEMU NVMe Ctrl (12340 ): 8792 I/Os completed (+1578) 00:16:45.187 QEMU NVMe Ctrl (12341 ): 8882 I/Os completed (+1601) 00:16:45.187 00:16:45.756 QEMU NVMe Ctrl (12340 ): 10528 I/Os completed (+1736) 00:16:45.756 QEMU NVMe Ctrl (12341 ): 10662 I/Os completed (+1780) 00:16:45.756 00:16:47.135 QEMU NVMe Ctrl (12340 ): 12244 I/Os completed (+1716) 00:16:47.135 QEMU NVMe Ctrl (12341 ): 12419 I/Os completed (+1757) 
00:16:47.135 00:16:48.074 QEMU NVMe Ctrl (12340 ): 13988 I/Os completed (+1744) 00:16:48.074 QEMU NVMe Ctrl (12341 ): 14181 I/Os completed (+1762) 00:16:48.074 00:16:49.049 QEMU NVMe Ctrl (12340 ): 15828 I/Os completed (+1840) 00:16:49.049 QEMU NVMe Ctrl (12341 ): 16035 I/Os completed (+1854) 00:16:49.049 00:16:49.983 QEMU NVMe Ctrl (12340 ): 17864 I/Os completed (+2036) 00:16:49.983 QEMU NVMe Ctrl (12341 ): 18071 I/Os completed (+2036) 00:16:49.983 00:16:50.919 QEMU NVMe Ctrl (12340 ): 19532 I/Os completed (+1668) 00:16:50.919 QEMU NVMe Ctrl (12341 ): 19763 I/Os completed (+1692) 00:16:50.919 00:16:51.855 QEMU NVMe Ctrl (12340 ): 21099 I/Os completed (+1567) 00:16:51.855 QEMU NVMe Ctrl (12341 ): 21394 I/Os completed (+1631) 00:16:51.855 00:16:52.114 13:36:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:52.114 13:36:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:52.114 13:36:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:52.114 13:36:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:52.114 [2024-11-20 13:36:03.897136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:52.114 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:52.114 [2024-11-20 13:36:03.899087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 [2024-11-20 13:36:03.899149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 [2024-11-20 13:36:03.899181] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 [2024-11-20 13:36:03.899206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:52.114 [2024-11-20 13:36:03.902553] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 [2024-11-20 13:36:03.902629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 [2024-11-20 13:36:03.902657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 [2024-11-20 13:36:03.902685] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 13:36:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:52.114 13:36:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:52.114 [2024-11-20 13:36:03.937320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:52.114 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:52.114 [2024-11-20 13:36:03.939154] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 [2024-11-20 13:36:03.939207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 [2024-11-20 13:36:03.939238] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 [2024-11-20 13:36:03.939260] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:52.114 [2024-11-20 13:36:03.942364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 [2024-11-20 13:36:03.942410] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 [2024-11-20 13:36:03.942436] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 [2024-11-20 13:36:03.942457] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:52.114 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:16:52.114 EAL: Scan for (pci) bus failed. 00:16:52.114 13:36:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:52.114 13:36:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:52.373 13:36:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:52.373 13:36:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:52.373 13:36:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:52.373 13:36:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:52.373 13:36:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:52.373 13:36:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:52.373 13:36:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:52.373 13:36:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:52.373 Attaching to 0000:00:10.0 00:16:52.373 Attached to 0000:00:10.0 00:16:52.373 13:36:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:52.373 13:36:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:52.373 13:36:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:52.373 Attaching to 0000:00:11.0 00:16:52.373 Attached to 0000:00:11.0 00:16:52.941 QEMU NVMe Ctrl (12340 ): 772 I/Os completed (+772) 00:16:52.941 QEMU NVMe Ctrl (12341 ): 588 I/Os completed (+588) 00:16:52.942 00:16:53.879 QEMU NVMe Ctrl (12340 ): 2500 I/Os completed (+1728) 00:16:53.879 QEMU NVMe Ctrl (12341 ): 2325 I/Os completed (+1737) 00:16:53.879 00:16:54.815 QEMU NVMe Ctrl (12340 ): 4384 I/Os completed (+1884) 00:16:54.815 QEMU NVMe Ctrl (12341 ): 4209 I/Os completed (+1884) 00:16:54.815 00:16:55.751 QEMU NVMe Ctrl (12340 ): 6014 I/Os completed (+1630) 00:16:55.751 QEMU NVMe Ctrl (12341 ): 5862 I/Os completed (+1653) 00:16:55.751 00:16:57.128 QEMU NVMe Ctrl (12340 ): 7716 I/Os completed (+1702) 00:16:57.128 QEMU NVMe Ctrl (12341 ): 7697 I/Os completed (+1835) 00:16:57.128 00:16:58.065 QEMU NVMe Ctrl (12340 ): 9372 I/Os completed (+1656) 00:16:58.065 QEMU NVMe Ctrl (12341 ): 9364 I/Os completed (+1667) 00:16:58.065 00:16:59.002 QEMU NVMe Ctrl (12340 ): 10935 I/Os completed (+1563) 00:16:59.002 QEMU NVMe Ctrl (12341 ): 10946 I/Os completed (+1582) 00:16:59.002 00:16:59.938 
QEMU NVMe Ctrl (12340 ): 12763 I/Os completed (+1828) 00:16:59.938 QEMU NVMe Ctrl (12341 ): 12777 I/Os completed (+1831) 00:16:59.938 00:17:00.873 QEMU NVMe Ctrl (12340 ): 14395 I/Os completed (+1632) 00:17:00.873 QEMU NVMe Ctrl (12341 ): 14427 I/Os completed (+1650) 00:17:00.873 00:17:01.808 QEMU NVMe Ctrl (12340 ): 16443 I/Os completed (+2048) 00:17:01.808 QEMU NVMe Ctrl (12341 ): 16475 I/Os completed (+2048) 00:17:01.808 00:17:02.744 QEMU NVMe Ctrl (12340 ): 18475 I/Os completed (+2032) 00:17:02.744 QEMU NVMe Ctrl (12341 ): 18507 I/Os completed (+2032) 00:17:02.744 00:17:04.121 QEMU NVMe Ctrl (12340 ): 20463 I/Os completed (+1988) 00:17:04.121 QEMU NVMe Ctrl (12341 ): 20495 I/Os completed (+1988) 00:17:04.121 00:17:04.380 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:04.380 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:04.380 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:04.380 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:04.380 [2024-11-20 13:36:16.307316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:04.380 Controller removed: QEMU NVMe Ctrl (12340 ) 00:17:04.380 [2024-11-20 13:36:16.309238] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.380 [2024-11-20 13:36:16.309303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.380 [2024-11-20 13:36:16.309327] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.380 [2024-11-20 13:36:16.309353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.380 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:04.380 [2024-11-20 13:36:16.312541] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.380 [2024-11-20 13:36:16.312610] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.380 [2024-11-20 13:36:16.312633] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.380 [2024-11-20 13:36:16.312654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.638 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:04.638 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:04.638 [2024-11-20 13:36:16.345080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:04.638 Controller removed: QEMU NVMe Ctrl (12341 ) 00:17:04.638 [2024-11-20 13:36:16.347141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.638 [2024-11-20 13:36:16.347212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.638 [2024-11-20 13:36:16.347238] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.638 [2024-11-20 13:36:16.347262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.638 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:04.638 [2024-11-20 13:36:16.350083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.638 [2024-11-20 13:36:16.350143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.638 [2024-11-20 13:36:16.350182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.638 [2024-11-20 13:36:16.350215] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:04.638 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:17:04.638 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:04.638 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:17:04.638 EAL: Scan for (pci) bus failed. 00:17:04.638 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:04.638 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:04.638 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:04.638 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:04.638 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:04.638 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:04.638 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:04.638 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:04.638 Attaching to 0000:00:10.0 00:17:04.896 Attached to 0000:00:10.0 00:17:04.896 QEMU NVMe Ctrl (12340 ): 152 I/Os completed (+152) 00:17:04.896 00:17:04.897 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:04.897 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:04.897 13:36:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:04.897 Attaching to 0000:00:11.0 00:17:04.897 Attached to 0000:00:11.0 00:17:04.897 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:17:04.897 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:17:04.897 [2024-11-20 13:36:16.710422] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:17:17.096 13:36:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:17:17.096 13:36:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:17.096 13:36:28 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.22 00:17:17.096 13:36:28 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.22 00:17:17.096 13:36:28 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:17:17.096 13:36:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.22 00:17:17.096 13:36:28 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.22 2 00:17:17.096 remove_attach_helper took 43.22s 
to complete (handling 2 nvme drive(s)) 13:36:28 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:17:23.668 13:36:34 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68335 00:17:23.668 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68335) - No such process 00:17:23.668 13:36:34 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68335 00:17:23.668 13:36:34 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:17:23.668 13:36:34 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:17:23.668 13:36:34 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:17:23.668 13:36:34 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68878 00:17:23.668 13:36:34 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:23.668 13:36:34 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:17:23.668 13:36:34 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68878 00:17:23.668 13:36:34 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68878 ']' 00:17:23.668 13:36:34 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.668 13:36:34 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.668 13:36:34 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.668 13:36:34 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.668 13:36:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:23.668 [2024-11-20 13:36:34.830568] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
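The second phase (tgt_run_hotplug) swaps the dedicated hotplug app for a plain spdk_tgt: hotplug detection is switched on over RPC, and because use_bdev=true the script now tracks device presence through the bdev layer rather than through the app's output. The bdev_bdfs helper traced repeatedly below reduces to one RPC and a jq filter, both shown here exactly as the test invokes them:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" bdev_nvme_set_hotplug -e      # enable hotplug monitoring in the target

bdev_bdfs() {
    # PCI addresses backing every NVMe bdev the target currently exposes.
    "$rpc" bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}
```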
00:17:23.668 [2024-11-20 13:36:34.830997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68878 ] 00:17:23.668 [2024-11-20 13:36:35.016676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.668 [2024-11-20 13:36:35.139218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.236 13:36:36 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.236 13:36:36 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:17:24.236 13:36:36 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:17:24.236 13:36:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.236 13:36:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:24.236 13:36:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.236 13:36:36 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:17:24.236 13:36:36 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:17:24.236 13:36:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:17:24.236 13:36:36 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:17:24.236 13:36:36 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:17:24.236 13:36:36 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:17:24.236 13:36:36 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:17:24.236 13:36:36 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:17:24.236 13:36:36 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:17:24.236 13:36:36 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:17:24.236 13:36:36 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:17:24.236 13:36:36 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:17:24.236 13:36:36 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:17:30.813 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:30.813 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:30.813 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:30.813 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:30.813 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:30.813 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:30.813 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:30.813 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:30.813 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:30.813 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:30.813 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:30.813 13:36:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.813 13:36:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:30.813 [2024-11-20 13:36:42.137304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:17:30.813 [2024-11-20 13:36:42.140016] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.813 [2024-11-20 13:36:42.140067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.813 [2024-11-20 13:36:42.140087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.813 [2024-11-20 13:36:42.140134] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.813 [2024-11-20 13:36:42.140152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.813 [2024-11-20 13:36:42.140168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.813 [2024-11-20 13:36:42.140182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.813 [2024-11-20 13:36:42.140197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.813 [2024-11-20 13:36:42.140210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.813 [2024-11-20 13:36:42.140231] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.813 [2024-11-20 13:36:42.140244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.813 [2024-11-20 13:36:42.140259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.813 13:36:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.813 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:30.813 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:30.813 [2024-11-20 13:36:42.636507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:30.813 [2024-11-20 13:36:42.639344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.814 [2024-11-20 13:36:42.639394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.814 [2024-11-20 13:36:42.639415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.814 [2024-11-20 13:36:42.639440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.814 [2024-11-20 13:36:42.639455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.814 [2024-11-20 13:36:42.639467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.814 [2024-11-20 13:36:42.639483] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.814 [2024-11-20 13:36:42.639495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.814 [2024-11-20 13:36:42.639509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.814 [2024-11-20 13:36:42.639522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:30.814 [2024-11-20 13:36:42.639535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.814 [2024-11-20 13:36:42.639548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.814 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:30.814 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:30.814 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:30.814 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:30.814 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:30.814 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:30.814 13:36:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.814 13:36:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:30.814 13:36:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.814 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:30.814 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:31.072 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:31.072 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:31.072 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:31.072 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:31.072 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:31.072 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:31.072 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:31.072 13:36:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:17:31.329 13:36:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:31.329 13:36:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:31.329 13:36:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:43.580 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:43.580 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:43.580 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:43.580 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:43.580 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:43.580 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:43.580 13:36:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.580 13:36:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:43.580 13:36:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.580 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:43.580 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:43.580 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:43.580 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:43.580 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:43.580 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:43.580 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:43.580 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:43.580 [2024-11-20 13:36:55.216422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:17:43.580 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:43.580 [2024-11-20 13:36:55.219328] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.580 [2024-11-20 13:36:55.219377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.581 [2024-11-20 13:36:55.219395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.581 [2024-11-20 13:36:55.219426] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.581 [2024-11-20 13:36:55.219439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.581 [2024-11-20 13:36:55.219455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.581 [2024-11-20 13:36:55.219469] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.581 [2024-11-20 13:36:55.219484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.581 [2024-11-20 13:36:55.219497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.581 [2024-11-20 13:36:55.219513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.581 [2024-11-20 13:36:55.219526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.581 [2024-11-20 13:36:55.219541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.581 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:43.581 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:43.581 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:43.581 13:36:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.581 13:36:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:43.581 13:36:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.581 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:43.581 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:43.839 [2024-11-20 13:36:55.615778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:43.839 [2024-11-20 13:36:55.618825] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.839 [2024-11-20 13:36:55.618883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.839 [2024-11-20 13:36:55.618909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.839 [2024-11-20 13:36:55.618937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.840 [2024-11-20 13:36:55.618954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.840 [2024-11-20 13:36:55.618967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.840 [2024-11-20 13:36:55.619002] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.840 [2024-11-20 13:36:55.619016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.840 [2024-11-20 13:36:55.619032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.840 [2024-11-20 13:36:55.619046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:43.840 [2024-11-20 13:36:55.619061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.840 [2024-11-20 13:36:55.619074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.840 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:43.840 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:43.840 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:43.840 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:43.840 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:43.840 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:43.840 13:36:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.840 13:36:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:43.840 13:36:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.098 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:44.098 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:44.098 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:44.098 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:44.098 13:36:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:44.098 13:36:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:44.098 13:36:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:44.098 13:36:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:44.098 13:36:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:44.098 13:36:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:17:44.356 13:36:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:44.356 13:36:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:44.356 13:36:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:56.554 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:56.554 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:56.554 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:56.554 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:56.554 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:56.554 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:56.555 13:37:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.555 13:37:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:56.555 13:37:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.555 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:56.555 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:56.555 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:56.555 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:56.555 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:56.555 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:56.555 [2024-11-20 13:37:08.295392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:56.555 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:56.555 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:56.555 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:17:56.555 EAL: Scan for (pci) bus failed. 
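[editor's note] The (( 1 > 0 )) / sleep 0.5 / "Still waiting" pattern above and below is a poll loop: after a controller is surprise-removed, the test re-reads the bdev list every half second until the removed BDF stops appearing. A hedged sketch of that wait (sw_hotplug.sh@50-51); the exact loop construct is inferred from the repeated xtrace expansions, not shown verbatim in the trace:

    # Hedged sketch of the post-remove wait (sw_hotplug.sh@50-51).
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))    # re-poll until the removed BDF disappears
    done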
00:17:56.555 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:56.555 [2024-11-20 13:37:08.298488] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.555 [2024-11-20 13:37:08.298532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.555 [2024-11-20 13:37:08.298550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.555 [2024-11-20 13:37:08.298582] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.555 [2024-11-20 13:37:08.298596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.555 [2024-11-20 13:37:08.298626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.555 [2024-11-20 13:37:08.298643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.555 [2024-11-20 13:37:08.298658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.555 [2024-11-20 13:37:08.298672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.555 [2024-11-20 13:37:08.298688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.555 [2024-11-20 13:37:08.298701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.555 [2024-11-20 13:37:08.298717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.555 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:56.555 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:56.555 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:56.555 13:37:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.555 13:37:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:56.555 13:37:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.555 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:56.555 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:56.814 [2024-11-20 13:37:08.694771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:56.814 [2024-11-20 13:37:08.697758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.814 [2024-11-20 13:37:08.698007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.814 [2024-11-20 13:37:08.698046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.814 [2024-11-20 13:37:08.698078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.814 [2024-11-20 13:37:08.698095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.814 [2024-11-20 13:37:08.698109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.814 [2024-11-20 13:37:08.698138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.814 [2024-11-20 13:37:08.698151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.814 [2024-11-20 13:37:08.698187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.814 [2024-11-20 13:37:08.698201] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:56.814 [2024-11-20 13:37:08.698217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.814 [2024-11-20 13:37:08.698230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.072 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:57.072 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:57.072 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:57.072 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:57.072 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:57.072 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:57.072 13:37:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.072 13:37:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:57.072 13:37:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.072 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:57.072 13:37:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:57.351 13:37:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:57.351 13:37:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:57.351 13:37:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:57.351 13:37:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:57.351 13:37:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:57.351 13:37:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:57.351 13:37:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:57.351 13:37:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:17:57.351 13:37:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:57.351 13:37:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:57.351 13:37:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.28 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.28 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.28 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.28 2 00:18:09.650 remove_attach_helper took 45.28s to complete (handling 2 nvme drive(s)) 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:18:09.650 13:37:21 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:18:09.650 13:37:21 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:18:09.650 13:37:21 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:18:16.221 13:37:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:16.221 13:37:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:16.221 13:37:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:16.221 13:37:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:16.221 13:37:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:16.221 13:37:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:16.221 13:37:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:16.221 13:37:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:16.221 13:37:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:16.221 13:37:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:16.221 13:37:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:16.221 13:37:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.221 13:37:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:16.221 [2024-11-20 13:37:27.468559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:18:16.221 [2024-11-20 13:37:27.471056] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.221 [2024-11-20 13:37:27.471107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.221 [2024-11-20 13:37:27.471127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.221 [2024-11-20 13:37:27.471156] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.221 [2024-11-20 13:37:27.471169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.221 [2024-11-20 13:37:27.471184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.221 [2024-11-20 13:37:27.471198] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.221 [2024-11-20 13:37:27.471212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.221 [2024-11-20 13:37:27.471224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.221 [2024-11-20 13:37:27.471242] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.221 [2024-11-20 13:37:27.471254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.221 [2024-11-20 13:37:27.471271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.221 13:37:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.221 13:37:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:18:16.221 13:37:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:16.221 [2024-11-20 13:37:27.867906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
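[editor's note] Each hotplug event opens with sw_hotplug.sh@39-40 echoing 1 once per device, which is what drops both controllers into the "failed state" reported above. The xtrace records only the value being echoed, not its destination; the sketch below assumes the conventional sysfs surprise-removal path:

    # Hedged sketch of the detach trigger (sw_hotplug.sh@39-40). The sysfs
    # 'remove' target is an assumption; the trace shows only 'echo 1'.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done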
00:18:16.221 [2024-11-20 13:37:27.870503] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.221 [2024-11-20 13:37:27.870552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.221 [2024-11-20 13:37:27.870573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.221 [2024-11-20 13:37:27.870614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.221 [2024-11-20 13:37:27.870631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.221 [2024-11-20 13:37:27.870643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.221 [2024-11-20 13:37:27.870660] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.221 [2024-11-20 13:37:27.870672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.221 [2024-11-20 13:37:27.870687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.221 [2024-11-20 13:37:27.870701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:16.221 [2024-11-20 13:37:27.870715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.221 [2024-11-20 13:37:27.870727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.221 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:18:16.221 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:16.221 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:16.221 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:16.221 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:16.221 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:16.221 13:37:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.221 13:37:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:16.221 13:37:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.221 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:16.221 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:16.221 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:16.221 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:16.221 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:16.480 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:16.480 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:16.480 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:16.480 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:16.480 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:18:16.480 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:16.480 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:16.480 13:37:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:28.690 13:37:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.690 13:37:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:28.690 13:37:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:28.690 [2024-11-20 13:37:40.447687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:18:28.690 [2024-11-20 13:37:40.451072] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:28.690 [2024-11-20 13:37:40.451278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.690 [2024-11-20 13:37:40.451413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.690 [2024-11-20 13:37:40.451669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:28.690 [2024-11-20 13:37:40.451714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.690 [2024-11-20 13:37:40.451833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.690 [2024-11-20 13:37:40.451902] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:28.690 [2024-11-20 13:37:40.451943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.690 [2024-11-20 13:37:40.452082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.690 [2024-11-20 13:37:40.452185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:28.690 [2024-11-20 13:37:40.452229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.690 [2024-11-20 13:37:40.452401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:28.690 13:37:40 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:28.690 13:37:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.690 13:37:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:28.690 13:37:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:18:28.690 13:37:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:29.004 [2024-11-20 13:37:40.847041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:18:29.004 [2024-11-20 13:37:40.852725] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:29.004 [2024-11-20 13:37:40.852785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.004 [2024-11-20 13:37:40.852807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.004 [2024-11-20 13:37:40.852836] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:29.004 [2024-11-20 13:37:40.852856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.004 [2024-11-20 13:37:40.852869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.004 [2024-11-20 13:37:40.852901] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:29.004 [2024-11-20 13:37:40.852913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.004 [2024-11-20 13:37:40.852928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.004 [2024-11-20 13:37:40.852941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:29.005 [2024-11-20 13:37:40.852956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.005 [2024-11-20 13:37:40.852967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.264 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:18:29.264 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:29.264 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:29.264 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:29.264 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:29.264 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 
00:18:29.264 13:37:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.264 13:37:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:29.264 13:37:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.264 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:29.264 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:29.264 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:29.264 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:29.264 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:29.522 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:29.522 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:29.522 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:29.522 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:29.522 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:18:29.522 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:29.522 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:29.522 13:37:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:41.745 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:41.745 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:41.745 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:41.745 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:41.745 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:41.745 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:41.745 13:37:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.745 13:37:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:41.745 13:37:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.745 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:41.746 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:41.746 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:41.746 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:41.746 [2024-11-20 13:37:53.526646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
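[editor's note] The re-attach half of the cycle (sw_hotplug.sh@56-62, expanded in full just above) echoes 1 once, then per device echoes uio_pci_generic, the BDF twice, and an empty string. Destinations are again absent from the trace; the shape is consistent with a PCI rescan followed by a driver_override rebind, so every path in this sketch is an assumption:

    # Hedged sketch of the re-attach sequence (sw_hotplug.sh@56-62); only
    # the echoed values come from the trace, all sysfs paths are assumed.
    echo 1 > /sys/bus/pci/rescan                                            # @56
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59
        echo "$dev" > /sys/bus/pci/drivers_probe                            # @60-61: BDF echoed twice; probe target assumed
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear the override
    done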
00:18:41.746 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:41.746 [2024-11-20 13:37:53.529148] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:41.746 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:41.746 [2024-11-20 13:37:53.529419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.746 [2024-11-20 13:37:53.529493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.746 [2024-11-20 13:37:53.529566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:41.746 [2024-11-20 13:37:53.529627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.746 [2024-11-20 13:37:53.529683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.746 [2024-11-20 13:37:53.529791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:41.746 [2024-11-20 13:37:53.529820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.746 [2024-11-20 13:37:53.529834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.746 [2024-11-20 13:37:53.529850] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:41.746 [2024-11-20 13:37:53.529862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.746 [2024-11-20 13:37:53.529876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.746 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:41.746 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:41.746 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:41.746 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:41.746 13:37:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.746 13:37:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:41.746 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:41.746 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:41.746 13:37:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.746 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:18:41.746 13:37:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:42.005 [2024-11-20 13:37:53.926017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
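[editor's note] Every bdev_bdfs expansion above goes through rpc_cmd, SPDK's shell wrapper around scripts/rpc.py. The real helper in autotest_common.sh keeps a persistent rpc.py session; as a simplified stand-in, it behaves roughly like this (socket path and $rootdir are assumptions):

    # Simplified, hedged stand-in for rpc_cmd; the actual helper in
    # autotest_common.sh reuses one long-lived rpc.py server process.
    rpc_cmd() {
        "$rootdir/scripts/rpc.py" -s "${RPC_SOCK:-/var/tmp/spdk.sock}" "$@"
    }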
00:18:42.005 [2024-11-20 13:37:53.928028] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.005 [2024-11-20 13:37:53.928072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.005 [2024-11-20 13:37:53.928093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.005 [2024-11-20 13:37:53.928119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.005 [2024-11-20 13:37:53.928134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.005 [2024-11-20 13:37:53.928146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.005 [2024-11-20 13:37:53.928163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.005 [2024-11-20 13:37:53.928175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.005 [2024-11-20 13:37:53.928193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.005 [2024-11-20 13:37:53.928206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:42.005 [2024-11-20 13:37:53.928225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.005 [2024-11-20 13:37:53.928237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.263 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:18:42.263 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:42.263 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:42.263 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:42.263 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:42.263 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:42.263 13:37:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.263 13:37:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:42.263 13:37:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.263 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:42.263 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:42.566 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:42.566 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:42.566 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:42.566 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:42.566 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:42.566 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:42.566 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:42.566 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:18:42.566 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:42.566 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:42.566 13:37:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:54.817 13:38:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:54.817 13:38:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:54.817 13:38:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:54.817 13:38:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:54.817 13:38:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:54.817 13:38:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.817 13:38:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:54.817 13:38:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.16 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.16 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:18:54.817 13:38:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.16 00:18:54.817 13:38:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.16 2 00:18:54.817 remove_attach_helper took 45.16s to complete (handling 2 nvme drive(s)) 13:38:06 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:18:54.817 13:38:06 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68878 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68878 ']' 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68878 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68878 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68878' 00:18:54.817 killing process with pid 68878 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68878 00:18:54.817 13:38:06 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68878 00:18:57.433 13:38:09 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:57.692 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:58.261 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:58.261 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:58.520 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:58.520 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:58.520 00:18:58.520 real 2m33.959s 00:18:58.520 user 1m52.305s 00:18:58.520 sys 0m22.016s 00:18:58.520 13:38:10 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.520 ************************************ 00:18:58.520 END TEST sw_hotplug 00:18:58.520 ************************************ 00:18:58.520 13:38:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:58.520 13:38:10 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:18:58.520 13:38:10 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:58.520 13:38:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:58.520 13:38:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.520 13:38:10 -- common/autotest_common.sh@10 -- # set +x 00:18:58.520 ************************************ 00:18:58.520 START TEST nvme_xnvme 00:18:58.520 ************************************ 00:18:58.520 13:38:10 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:58.779 * Looking for test storage... 00:18:58.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:58.779 13:38:10 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:58.780 13:38:10 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:18:58.780 13:38:10 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:58.780 13:38:10 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.780 13:38:10 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:18:58.780 13:38:10 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.780 13:38:10 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:58.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.780 --rc genhtml_branch_coverage=1 00:18:58.780 --rc genhtml_function_coverage=1 00:18:58.780 --rc genhtml_legend=1 00:18:58.780 --rc geninfo_all_blocks=1 00:18:58.780 --rc geninfo_unexecuted_blocks=1 00:18:58.780 00:18:58.780 ' 00:18:58.780 13:38:10 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:58.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.780 --rc genhtml_branch_coverage=1 00:18:58.780 --rc genhtml_function_coverage=1 00:18:58.780 --rc genhtml_legend=1 00:18:58.780 --rc geninfo_all_blocks=1 00:18:58.780 --rc geninfo_unexecuted_blocks=1 00:18:58.780 00:18:58.780 ' 00:18:58.780 13:38:10 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:58.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.780 --rc genhtml_branch_coverage=1 00:18:58.780 --rc genhtml_function_coverage=1 00:18:58.780 --rc genhtml_legend=1 00:18:58.780 --rc geninfo_all_blocks=1 00:18:58.780 --rc geninfo_unexecuted_blocks=1 00:18:58.780 00:18:58.780 ' 00:18:58.780 13:38:10 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:58.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.780 --rc genhtml_branch_coverage=1 00:18:58.780 --rc genhtml_function_coverage=1 00:18:58.780 --rc genhtml_legend=1 00:18:58.780 --rc geninfo_all_blocks=1 00:18:58.780 --rc geninfo_unexecuted_blocks=1 00:18:58.780 00:18:58.780 ' 00:18:58.780 13:38:10 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:18:58.780 13:38:10 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:18:58.780 13:38:10 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:18:58.780 13:38:10 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:18:58.780 13:38:10 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:18:58.780 13:38:10 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:18:58.780 13:38:10 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:18:58.780 13:38:10 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:18:58.780 13:38:10 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:18:58.780 13:38:10 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:18:58.780 13:38:10 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:18:58.781 13:38:10 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:18:59.042 13:38:10 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:18:59.042 13:38:10 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:18:59.042 13:38:10 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:18:59.042 13:38:10 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:18:59.042 13:38:10 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:18:59.042 13:38:10 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:18:59.042 13:38:10 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:18:59.042 13:38:10 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:18:59.042 13:38:10 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:18:59.042 13:38:10 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:18:59.042 13:38:10 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:18:59.042 13:38:10 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:18:59.042 13:38:10 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:59.042 13:38:10 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:59.042 13:38:10 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:18:59.042 13:38:10 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:18:59.042 13:38:10 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:18:59.042 13:38:10 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:18:59.042 13:38:10 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:18:59.042 13:38:10 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:18:59.042 13:38:10 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:18:59.042 13:38:10 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:18:59.042 13:38:10 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:18:59.042 13:38:10 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:18:59.042 13:38:10 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:18:59.042 13:38:10 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:18:59.042 13:38:10 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:18:59.042 13:38:10 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:18:59.042 #define SPDK_CONFIG_H 00:18:59.042 #define SPDK_CONFIG_AIO_FSDEV 1 00:18:59.042 #define SPDK_CONFIG_APPS 1 00:18:59.042 #define SPDK_CONFIG_ARCH native 00:18:59.042 #define SPDK_CONFIG_ASAN 1 00:18:59.042 #undef SPDK_CONFIG_AVAHI 00:18:59.042 #undef SPDK_CONFIG_CET 00:18:59.042 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:18:59.042 #define SPDK_CONFIG_COVERAGE 1 00:18:59.042 #define SPDK_CONFIG_CROSS_PREFIX 00:18:59.042 #undef SPDK_CONFIG_CRYPTO 00:18:59.042 #undef SPDK_CONFIG_CRYPTO_MLX5 00:18:59.042 #undef SPDK_CONFIG_CUSTOMOCF 00:18:59.042 #undef SPDK_CONFIG_DAOS 00:18:59.042 #define SPDK_CONFIG_DAOS_DIR 00:18:59.042 #define SPDK_CONFIG_DEBUG 1 00:18:59.042 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:18:59.042 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:59.042 #define SPDK_CONFIG_DPDK_INC_DIR 00:18:59.042 #define SPDK_CONFIG_DPDK_LIB_DIR 00:18:59.042 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:18:59.042 #undef SPDK_CONFIG_DPDK_UADK 00:18:59.042 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:59.042 #define SPDK_CONFIG_EXAMPLES 1 00:18:59.042 #undef SPDK_CONFIG_FC 00:18:59.042 #define SPDK_CONFIG_FC_PATH 00:18:59.042 #define SPDK_CONFIG_FIO_PLUGIN 1 00:18:59.042 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:18:59.042 #define SPDK_CONFIG_FSDEV 1 00:18:59.042 #undef SPDK_CONFIG_FUSE 00:18:59.042 #undef SPDK_CONFIG_FUZZER 00:18:59.042 #define SPDK_CONFIG_FUZZER_LIB 00:18:59.042 #undef SPDK_CONFIG_GOLANG 00:18:59.042 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:18:59.042 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:18:59.042 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:18:59.042 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:18:59.042 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:18:59.042 #undef SPDK_CONFIG_HAVE_LIBBSD 00:18:59.042 #undef SPDK_CONFIG_HAVE_LZ4 00:18:59.042 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:18:59.042 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:18:59.042 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:18:59.042 #define SPDK_CONFIG_IDXD 1 00:18:59.042 #define SPDK_CONFIG_IDXD_KERNEL 1 00:18:59.042 #undef SPDK_CONFIG_IPSEC_MB 00:18:59.042 #define SPDK_CONFIG_IPSEC_MB_DIR 00:18:59.042 #define SPDK_CONFIG_ISAL 1 00:18:59.042 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:18:59.042 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:18:59.042 #define SPDK_CONFIG_LIBDIR 00:18:59.042 #undef SPDK_CONFIG_LTO 00:18:59.042 #define SPDK_CONFIG_MAX_LCORES 128 00:18:59.042 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:18:59.042 #define SPDK_CONFIG_NVME_CUSE 1 00:18:59.042 #undef SPDK_CONFIG_OCF 00:18:59.042 #define SPDK_CONFIG_OCF_PATH 00:18:59.042 #define SPDK_CONFIG_OPENSSL_PATH 00:18:59.042 #undef SPDK_CONFIG_PGO_CAPTURE 00:18:59.042 #define SPDK_CONFIG_PGO_DIR 00:18:59.042 #undef SPDK_CONFIG_PGO_USE 00:18:59.042 #define SPDK_CONFIG_PREFIX /usr/local 00:18:59.042 #undef SPDK_CONFIG_RAID5F 00:18:59.042 #undef SPDK_CONFIG_RBD 00:18:59.042 #define SPDK_CONFIG_RDMA 1 00:18:59.042 #define SPDK_CONFIG_RDMA_PROV verbs 00:18:59.042 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:18:59.042 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:18:59.042 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:18:59.042 #define SPDK_CONFIG_SHARED 1 00:18:59.042 #undef SPDK_CONFIG_SMA 00:18:59.042 #define SPDK_CONFIG_TESTS 1 00:18:59.042 #undef SPDK_CONFIG_TSAN 00:18:59.042 #define SPDK_CONFIG_UBLK 1 00:18:59.042 #define SPDK_CONFIG_UBSAN 1 00:18:59.042 #undef SPDK_CONFIG_UNIT_TESTS 00:18:59.042 #undef SPDK_CONFIG_URING 00:18:59.042 #define SPDK_CONFIG_URING_PATH 00:18:59.042 #undef SPDK_CONFIG_URING_ZNS 00:18:59.042 #undef SPDK_CONFIG_USDT 00:18:59.042 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:18:59.042 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:18:59.042 #undef SPDK_CONFIG_VFIO_USER 00:18:59.042 #define SPDK_CONFIG_VFIO_USER_DIR 00:18:59.042 #define SPDK_CONFIG_VHOST 1 00:18:59.042 #define SPDK_CONFIG_VIRTIO 1 00:18:59.042 #undef SPDK_CONFIG_VTUNE 00:18:59.043 #define SPDK_CONFIG_VTUNE_DIR 00:18:59.043 #define SPDK_CONFIG_WERROR 1 00:18:59.043 #define SPDK_CONFIG_WPDK_DIR 00:18:59.043 #define SPDK_CONFIG_XNVME 1 00:18:59.043 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:18:59.043 13:38:10 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:18:59.043 13:38:10 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:59.043 13:38:10 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.043 13:38:10 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.043 13:38:10 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.043 13:38:10 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.043 13:38:10 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.043 13:38:10 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.043 13:38:10 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.043 13:38:10 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:59.043 13:38:10 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@68 -- # uname -s 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:18:59.043 
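The paths/export.sh trace above shows each prepend step: @2 through @4 push /opt/golangci, /opt/go, and /opt/protoc onto the front of PATH with no deduplication, which is why the same entries repeat several times in the exported value. The underlying idiom, as a sketch:

# Every source of export.sh prepends again; duplicates are harmless
# but accumulate, exactly as seen in the traced PATH value.
PATH=/opt/golangci/1.54.2/bin:$PATH
PATH=/opt/go/1.21.1/bin:$PATH
PATH=/opt/protoc/21.7/bin:$PATH
export PATH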
13:38:10 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:18:59.043 13:38:10 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- 
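pm/common@70 through @78 above build the monitor table: an associative array records which resource monitors need sudo, and a two-entry SUDO array maps that 0/1 flag to an optional "sudo -E" prefix. A sketch of the same structure; the launch loop and echo are illustrative, not from the original:

# 1 => monitor needs sudo, 0 => plain user; SUDO[] turns the flag into a prefix.
declare -A MONITOR_RESOURCES_SUDO=(
  [collect-bmc-pm]=1 [collect-cpu-load]=0 [collect-cpu-temp]=0 [collect-vmstat]=0
)
SUDO=("" "sudo -E")
for mon in collect-cpu-load collect-vmstat; do
  echo "launch: ${SUDO[${MONITOR_RESOURCES_SUDO[$mon]}]} $mon"
done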
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:18:59.043 13:38:10 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:18:59.044 13:38:10 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:59.044 13:38:10 nvme_xnvme -- 
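The long run of paired lines above ("-- # : 0" followed by "export SPDK_TEST_*") is the trace of a default-then-export idiom: parameter expansion assigns a value only when the variable is unset, so settings injected by the job (such as SPDK_TEST_NVME=1 from autorun-spdk.conf) survive. Two representative flags as a sketch:

# Assign the default only if unset, then export; the ":" builtin makes
# the expansion a no-op statement, which is what xtrace prints as "# : 0".
: "${SPDK_TEST_NVME:=0}"
export SPDK_TEST_NVME
: "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"
export SPDK_TEST_NVMF_TRANSPORT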
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
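autotest_common.sh@204 through @244 above set up the leak-sanitizer suppressions: a fresh file collects known-leak patterns (here leak:libfuse3.so) and LSAN_OPTIONS points at it. A condensed sketch; the intermediate "cat" of an extra suppression source seen in the trace is omitted:

# Start from a clean suppression file, append known leaks, point LSAN at it.
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo leak:libfuse3.so >> "$asan_suppression_file"
export LSAN_OPTIONS=suppressions=$asan_suppression_file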
00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70223 ]] 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70223 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:18:59.044 13:38:10 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.RavqyN 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.RavqyN/tests/xnvme /tmp/spdk.RavqyN 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:18:59.045 13:38:10 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974908928 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593042944 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261657600 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.045 13:38:10 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974908928 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593042944 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=92479610880 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=7223169024 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:18:59.045 * Looking for test storage... 
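set_test_storage, traced at @340 through @379 above, parses "df -T" into mounts/fss/sizes/avails arrays and then walks storage_candidates for the first filesystem with at least requested_size bytes free. A simplified sketch of that selection using GNU df's --output flag in place of the array bookkeeping; the candidate paths are illustrative:

# Pick the first candidate directory whose filesystem has enough free space.
requested_size=2147483648
for target_dir in /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp; do
  target_space=$(df --output=avail -B1 "$target_dir" | tail -n1)
  if (( target_space >= requested_size )); then
    echo "using $target_dir ($target_space bytes free)"
    break
  fi
done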
00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974908928 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:59.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:59.045 13:38:10 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:59.045 13:38:10 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.045 13:38:10 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.045 13:38:10 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.045 13:38:10 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.045 13:38:10 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.045 13:38:10 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.045 13:38:10 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.045 13:38:10 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.045 13:38:10 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.045 13:38:10 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.045 13:38:10 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.046 13:38:10 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:59.046 13:38:10 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:18:59.046 13:38:10 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.046 13:38:10 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:59.046 13:38:10 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:59.046 13:38:10 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:59.305 13:38:10 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.305 13:38:10 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:59.305 13:38:10 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.305 13:38:10 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:59.305 13:38:11 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:59.305 13:38:11 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.305 13:38:11 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:59.305 13:38:11 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.305 13:38:11 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.305 13:38:11 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.305 13:38:11 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:18:59.305 13:38:11 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.305 13:38:11 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:59.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.305 --rc genhtml_branch_coverage=1 00:18:59.305 --rc genhtml_function_coverage=1 00:18:59.305 --rc genhtml_legend=1 00:18:59.305 --rc geninfo_all_blocks=1 00:18:59.305 --rc geninfo_unexecuted_blocks=1 00:18:59.305 00:18:59.305 ' 00:18:59.305 13:38:11 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:59.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.305 --rc genhtml_branch_coverage=1 00:18:59.305 --rc genhtml_function_coverage=1 00:18:59.305 --rc genhtml_legend=1 00:18:59.305 --rc geninfo_all_blocks=1 
00:18:59.305 --rc geninfo_unexecuted_blocks=1 00:18:59.305 00:18:59.305 ' 00:18:59.305 13:38:11 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:59.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.305 --rc genhtml_branch_coverage=1 00:18:59.305 --rc genhtml_function_coverage=1 00:18:59.305 --rc genhtml_legend=1 00:18:59.305 --rc geninfo_all_blocks=1 00:18:59.305 --rc geninfo_unexecuted_blocks=1 00:18:59.305 00:18:59.305 ' 00:18:59.305 13:38:11 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:59.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.305 --rc genhtml_branch_coverage=1 00:18:59.305 --rc genhtml_function_coverage=1 00:18:59.305 --rc genhtml_legend=1 00:18:59.305 --rc geninfo_all_blocks=1 00:18:59.305 --rc geninfo_unexecuted_blocks=1 00:18:59.305 00:18:59.305 ' 00:18:59.305 13:38:11 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:59.305 13:38:11 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.305 13:38:11 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.305 13:38:11 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.305 13:38:11 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.305 13:38:11 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.305 13:38:11 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.305 13:38:11 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.305 13:38:11 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:59.305 13:38:11 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.305 13:38:11 nvme_xnvme -- 
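The lt/cmp_versions trace further up (scripts/common.sh@333 through @368) splits dotted versions on ".-:" and compares them numerically field by field to decide whether the installed lcov predates 2.x. A shorter equivalent sketch using sort -V in place of the manual loop, which is a swapped-in technique rather than the traced implementation:

# "less than" via version sort: $1 sorts first and differs from $2.
lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
if lt "$(lcov --version | awk '{print $NF}')" 2; then
  echo "lcov < 2: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
fi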
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:18:59.305 13:38:11 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:18:59.305 13:38:11 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:18:59.305 13:38:11 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:18:59.305 13:38:11 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:18:59.305 13:38:11 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:18:59.305 13:38:11 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:18:59.305 13:38:11 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:18:59.305 13:38:11 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:18:59.305 13:38:11 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:18:59.306 13:38:11 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:18:59.306 13:38:11 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:18:59.306 13:38:11 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:18:59.306 13:38:11 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:18:59.306 13:38:11 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:18:59.306 13:38:11 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:18:59.306 13:38:11 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:18:59.306 13:38:11 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:18:59.306 13:38:11 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:18:59.306 13:38:11 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:18:59.306 13:38:11 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:18:59.306 13:38:11 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:59.875 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:00.134 Waiting for block devices as requested 00:19:00.134 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:00.134 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:00.393 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:19:00.393 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:19:05.698 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:19:05.698 13:38:17 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:19:05.958 13:38:17 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:19:05.958 13:38:17 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:19:06.217 13:38:18 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:19:06.217 13:38:18 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:19:06.217 No valid GPT data, bailing 00:19:06.217 13:38:18 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:06.217 13:38:18 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:19:06.217 13:38:18 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:19:06.217 13:38:18 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:06.217 13:38:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:06.217 13:38:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.217 13:38:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:06.217 ************************************ 00:19:06.217 START TEST xnvme_rpc 00:19:06.217 ************************************ 00:19:06.217 13:38:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:06.217 13:38:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:06.217 13:38:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:06.217 13:38:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:06.217 13:38:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:06.217 13:38:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.217 13:38:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70619 00:19:06.217 13:38:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70619 00:19:06.217 13:38:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70619 ']' 00:19:06.217 13:38:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.217 13:38:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.217 13:38:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.217 13:38:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.217 13:38:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.476 [2024-11-20 13:38:18.190095] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
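The free-device probe above (scripts/common.sh@381 through @395) first asks spdk-gpt.py about the disk ("No valid GPT data, bailing") and then falls back to blkid: an empty PTTYPE means no partition table, so block_in_use returns 1 and the test may claim the drive. A sketch of the blkid half only; the GPT script check is left out:

# A device counts as "in use" when blkid reports a partition-table type.
block_in_use() {
  local block=$1 pt
  pt=$(blkid -s PTTYPE -o value "$block") || true
  [[ -n $pt ]]
}
block_in_use /dev/nvme0n1 || echo "/dev/nvme0n1 is free for xnvme tests"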
00:19:06.476 [2024-11-20 13:38:18.190257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70619 ] 00:19:06.476 [2024-11-20 13:38:18.374841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.734 [2024-11-20 13:38:18.493494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.671 xnvme_bdev 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- 
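The rpc_xnvme calls above verify the created bdev parameter by parameter: rpc_cmd (a wrapper around scripts/rpc.py) dumps the bdev subsystem configuration and jq selects the bdev_xnvme_create params. A sketch of the helper and one assertion, assuming the default RPC socket at /var/tmp/spdk.sock:

# Pull one parameter of the xnvme bdev back out of the running target.
rpc_xnvme() {
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev |
    jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
}
[[ $(rpc_xnvme filename) == /dev/nvme0n1 ]] && echo "filename matches"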
common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70619 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70619 ']' 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70619 00:19:07.671 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:07.672 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.672 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70619 00:19:07.930 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.930 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.930 killing process with pid 70619 00:19:07.930 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70619' 00:19:07.930 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70619 00:19:07.930 13:38:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70619 00:19:10.467 00:19:10.467 real 0m3.991s 00:19:10.467 user 0m4.118s 00:19:10.467 sys 0m0.561s 00:19:10.467 13:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.467 13:38:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.467 ************************************ 00:19:10.467 END TEST xnvme_rpc 00:19:10.467 ************************************ 00:19:10.467 13:38:22 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:10.467 13:38:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:10.467 13:38:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.467 13:38:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:10.467 ************************************ 00:19:10.467 START TEST xnvme_bdevperf 00:19:10.467 ************************************ 00:19:10.467 13:38:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:10.467 13:38:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:10.467 13:38:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:19:10.467 13:38:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:10.467 13:38:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:10.467 13:38:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:10.467 13:38:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:10.467 13:38:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:10.467 { 00:19:10.467 "subsystems": [ 00:19:10.467 { 00:19:10.467 "subsystem": "bdev", 00:19:10.467 "config": [ 00:19:10.467 { 00:19:10.467 "params": { 00:19:10.467 "io_mechanism": "libaio", 00:19:10.467 "conserve_cpu": false, 00:19:10.467 "filename": "/dev/nvme0n1", 00:19:10.467 "name": "xnvme_bdev" 00:19:10.467 }, 00:19:10.467 "method": "bdev_xnvme_create" 00:19:10.467 }, 00:19:10.467 { 00:19:10.467 "method": "bdev_wait_for_examine" 00:19:10.467 } 00:19:10.467 ] 00:19:10.467 } 00:19:10.467 ] 00:19:10.467 } 00:19:10.467 [2024-11-20 13:38:22.250164] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:19:10.467 [2024-11-20 13:38:22.250316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70704 ] 00:19:10.727 [2024-11-20 13:38:22.437778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.727 [2024-11-20 13:38:22.595130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.295 Running I/O for 5 seconds... 00:19:13.166 38073.00 IOPS, 148.72 MiB/s [2024-11-20T13:38:26.060Z] 38807.00 IOPS, 151.59 MiB/s [2024-11-20T13:38:27.072Z] 38114.67 IOPS, 148.89 MiB/s [2024-11-20T13:38:28.009Z] 38888.25 IOPS, 151.91 MiB/s 00:19:16.052 Latency(us) 00:19:16.052 [2024-11-20T13:38:28.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.052 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:16.052 xnvme_bdev : 5.00 38661.09 151.02 0.00 0.00 1651.79 189.17 5421.85 00:19:16.052 [2024-11-20T13:38:28.009Z] =================================================================================================================== 00:19:16.052 [2024-11-20T13:38:28.009Z] Total : 38661.09 151.02 0.00 0.00 1651.79 189.17 5421.85 00:19:17.431 13:38:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:17.431 13:38:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:17.431 13:38:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:17.431 13:38:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:17.431 13:38:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:17.431 { 00:19:17.431 "subsystems": [ 00:19:17.431 { 00:19:17.431 "subsystem": "bdev", 00:19:17.431 "config": [ 00:19:17.431 { 00:19:17.431 "params": { 00:19:17.431 "io_mechanism": "libaio", 00:19:17.431 "conserve_cpu": false, 00:19:17.431 "filename": "/dev/nvme0n1", 00:19:17.431 "name": "xnvme_bdev" 00:19:17.431 }, 00:19:17.431 "method": "bdev_xnvme_create" 00:19:17.431 }, 00:19:17.431 { 00:19:17.431 "method": "bdev_wait_for_examine" 00:19:17.431 } 00:19:17.431 ] 00:19:17.431 } 00:19:17.431 ] 00:19:17.431 } 00:19:17.431 [2024-11-20 13:38:29.223051] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
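The bdevperf run above shows the configuration path: gen_conf emits the JSON printed between the braces, and the binary reads it from an inherited file descriptor via --json /dev/fd/62, so no config file ever lands on disk. An equivalent sketch using process substitution; the fd number bash picks will differ from 62:

# Feed the bdev config to bdevperf through a pipe-backed /dev/fd path.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json <(cat <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"io_mechanism":"libaio","conserve_cpu":false,
   "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
   "method":"bdev_xnvme_create"},
  {"method":"bdev_wait_for_examine"}]}]}
JSON
) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096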
00:19:17.431 [2024-11-20 13:38:29.223178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70785 ] 00:19:17.690 [2024-11-20 13:38:29.402765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.690 [2024-11-20 13:38:29.517496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.949 Running I/O for 5 seconds... 00:19:20.263 38173.00 IOPS, 149.11 MiB/s [2024-11-20T13:38:33.157Z] 36177.50 IOPS, 141.32 MiB/s [2024-11-20T13:38:34.090Z] 37444.33 IOPS, 146.27 MiB/s [2024-11-20T13:38:35.026Z] 38155.50 IOPS, 149.04 MiB/s 00:19:23.069 Latency(us) 00:19:23.069 [2024-11-20T13:38:35.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.069 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:23.069 xnvme_bdev : 5.00 38400.61 150.00 0.00 0.00 1662.75 160.39 5316.58 00:19:23.069 [2024-11-20T13:38:35.026Z] =================================================================================================================== 00:19:23.069 [2024-11-20T13:38:35.026Z] Total : 38400.61 150.00 0.00 0.00 1662.75 160.39 5316.58 00:19:24.464 00:19:24.464 real 0m13.862s 00:19:24.464 user 0m4.944s 00:19:24.464 sys 0m5.935s 00:19:24.464 13:38:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.464 13:38:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:24.464 ************************************ 00:19:24.464 END TEST xnvme_bdevperf 00:19:24.464 ************************************ 00:19:24.464 13:38:36 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:24.464 13:38:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:24.464 13:38:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.464 13:38:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:24.464 ************************************ 00:19:24.464 START TEST xnvme_fio_plugin 00:19:24.464 ************************************ 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:24.464 13:38:36 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:24.464 13:38:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:24.464 { 00:19:24.464 "subsystems": [ 00:19:24.464 { 00:19:24.464 "subsystem": "bdev", 00:19:24.464 "config": [ 00:19:24.464 { 00:19:24.464 "params": { 00:19:24.464 "io_mechanism": "libaio", 00:19:24.464 "conserve_cpu": false, 00:19:24.464 "filename": "/dev/nvme0n1", 00:19:24.464 "name": "xnvme_bdev" 00:19:24.464 }, 00:19:24.464 "method": "bdev_xnvme_create" 00:19:24.464 }, 00:19:24.464 { 00:19:24.464 "method": "bdev_wait_for_examine" 00:19:24.464 } 00:19:24.464 ] 00:19:24.464 } 00:19:24.464 ] 00:19:24.464 } 00:19:24.464 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:24.464 fio-3.35 00:19:24.464 Starting 1 thread 00:19:31.035 00:19:31.035 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70910: Wed Nov 20 13:38:42 2024 00:19:31.035 read: IOPS=42.4k, BW=165MiB/s (173MB/s)(827MiB/5001msec) 00:19:31.035 slat (usec): min=4, max=462, avg=20.79, stdev=22.32 00:19:31.035 clat (usec): min=67, max=5633, avg=879.03, stdev=550.23 00:19:31.035 lat (usec): min=88, max=5687, avg=899.83, stdev=554.28 00:19:31.035 clat percentiles (usec): 00:19:31.035 | 1.00th=[ 174], 5.00th=[ 251], 10.00th=[ 322], 20.00th=[ 449], 00:19:31.035 | 30.00th=[ 562], 40.00th=[ 676], 50.00th=[ 791], 60.00th=[ 906], 00:19:31.035 | 70.00th=[ 1037], 80.00th=[ 1205], 90.00th=[ 1467], 95.00th=[ 1778], 00:19:31.035 | 99.00th=[ 3064], 99.50th=[ 3720], 99.90th=[ 4555], 99.95th=[ 4752], 00:19:31.035 | 99.99th=[ 5276] 00:19:31.035 bw ( KiB/s): min=144136, max=189936, per=99.88%, avg=169232.89, stdev=14651.07, samples=9 
00:19:31.035 iops : min=36034, max=47484, avg=42308.22, stdev=3662.77, samples=9 00:19:31.035 lat (usec) : 100=0.03%, 250=4.96%, 500=19.35%, 750=22.23%, 1000=20.79% 00:19:31.035 lat (msec) : 2=29.22%, 4=3.08%, 10=0.34% 00:19:31.035 cpu : usr=24.36%, sys=51.90%, ctx=138, majf=0, minf=764 00:19:31.035 IO depths : 1=0.1%, 2=1.2%, 4=4.4%, 8=11.4%, 16=26.1%, 32=55.1%, >=64=1.8% 00:19:31.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.035 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:19:31.035 issued rwts: total=211830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.035 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.035 00:19:31.035 Run status group 0 (all jobs): 00:19:31.035 READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=827MiB (868MB), run=5001-5001msec 00:19:31.657 ----------------------------------------------------- 00:19:31.657 Suppressions used: 00:19:31.657 count bytes template 00:19:31.657 1 11 /usr/src/fio/parse.c 00:19:31.657 1 8 libtcmalloc_minimal.so 00:19:31.657 1 904 libcrypto.so 00:19:31.657 ----------------------------------------------------- 00:19:31.657 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:31.657 13:38:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:31.657 { 00:19:31.657 "subsystems": [ 00:19:31.657 { 00:19:31.657 "subsystem": "bdev", 00:19:31.657 "config": [ 00:19:31.657 { 00:19:31.657 "params": { 00:19:31.657 "io_mechanism": "libaio", 00:19:31.657 "conserve_cpu": false, 00:19:31.657 "filename": "/dev/nvme0n1", 00:19:31.657 "name": "xnvme_bdev" 00:19:31.657 }, 00:19:31.657 "method": "bdev_xnvme_create" 00:19:31.657 }, 00:19:31.657 { 00:19:31.657 "method": "bdev_wait_for_examine" 00:19:31.657 } 00:19:31.657 ] 00:19:31.657 } 00:19:31.657 ] 00:19:31.657 } 00:19:31.913 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:31.913 fio-3.35 00:19:31.913 Starting 1 thread 00:19:38.475 00:19:38.475 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71011: Wed Nov 20 13:38:49 2024 00:19:38.475 write: IOPS=39.6k, BW=155MiB/s (162MB/s)(774MiB/5001msec); 0 zone resets 00:19:38.475 slat (usec): min=4, max=2649, avg=21.64, stdev=28.29 00:19:38.475 clat (usec): min=15, max=6901, avg=967.05, stdev=615.59 00:19:38.475 lat (usec): min=73, max=6925, avg=988.69, stdev=619.40 00:19:38.475 clat percentiles (usec): 00:19:38.475 | 1.00th=[ 188], 5.00th=[ 277], 10.00th=[ 359], 20.00th=[ 494], 00:19:38.475 | 30.00th=[ 619], 40.00th=[ 742], 50.00th=[ 857], 60.00th=[ 979], 00:19:38.475 | 70.00th=[ 1123], 80.00th=[ 1303], 90.00th=[ 1647], 95.00th=[ 2040], 00:19:38.475 | 99.00th=[ 3490], 99.50th=[ 4047], 99.90th=[ 4752], 99.95th=[ 5014], 00:19:38.475 | 99.99th=[ 5407] 00:19:38.475 bw ( KiB/s): min=136487, max=174248, per=100.00%, avg=159352.00, stdev=12659.02, samples=9 00:19:38.475 iops : min=34121, max=43562, avg=39837.89, stdev=3164.90, samples=9 00:19:38.475 lat (usec) : 20=0.01%, 50=0.01%, 100=0.05%, 250=3.48%, 500=16.88% 00:19:38.475 lat (usec) : 750=20.62%, 1000=20.56% 00:19:38.475 lat (msec) : 2=33.09%, 4=4.77%, 10=0.54% 00:19:38.475 cpu : usr=26.82%, sys=51.12%, ctx=187, majf=0, minf=765 00:19:38.475 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=10.5%, 16=25.4%, 32=57.2%, >=64=1.9% 00:19:38.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.475 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:19:38.475 issued rwts: total=0,198143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.475 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:38.475 00:19:38.475 Run status group 0 (all jobs): 00:19:38.475 WRITE: bw=155MiB/s (162MB/s), 155MiB/s-155MiB/s (162MB/s-162MB/s), io=774MiB (812MB), run=5001-5001msec 00:19:39.043 ----------------------------------------------------- 00:19:39.043 Suppressions used: 00:19:39.043 count bytes template 00:19:39.043 1 11 /usr/src/fio/parse.c 00:19:39.043 1 8 libtcmalloc_minimal.so 00:19:39.043 1 904 libcrypto.so 00:19:39.043 ----------------------------------------------------- 00:19:39.043 00:19:39.043 00:19:39.043 real 0m14.904s 00:19:39.043 user 0m6.369s 00:19:39.043 sys 0m5.913s 
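[annotation] The fio passes above run through SPDK's external ioengine rather than against a raw block device: the harness LD_PRELOADs build/fio/spdk_bdev (plus libasan.so.8, only because this build is ASAN-instrumented) and hands the same one-bdev JSON to --spdk_json_conf. A standalone sketch with the config saved to xnvme.json (paths are assumptions; the fio flags are copied from this run):

  LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=xnvme.json \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev

Note that --filename names the SPDK bdev, not a device node, and --thread=1 is required because the SPDK plugin only supports fio's threaded mode.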
00:19:39.043 13:38:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.043 13:38:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:39.043 ************************************ 00:19:39.043 END TEST xnvme_fio_plugin 00:19:39.043 ************************************ 00:19:39.302 13:38:51 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:39.302 13:38:51 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:19:39.302 13:38:51 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:19:39.302 13:38:51 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:39.302 13:38:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:39.302 13:38:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.302 13:38:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:39.302 ************************************ 00:19:39.302 START TEST xnvme_rpc 00:19:39.302 ************************************ 00:19:39.302 13:38:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:39.302 13:38:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:39.302 13:38:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:39.302 13:38:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:39.302 13:38:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:39.302 13:38:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71097 00:19:39.302 13:38:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:39.302 13:38:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71097 00:19:39.302 13:38:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71097 ']' 00:19:39.302 13:38:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.302 13:38:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.302 13:38:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.302 13:38:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.302 13:38:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:39.302 [2024-11-20 13:38:51.175488] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
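[annotation] xnvme_rpc is a create/inspect/delete round-trip against a freshly started spdk_tgt; rpc_cmd in this harness wraps scripts/rpk.py-style RPC dispatch via scripts/rpc.py, so the equivalent by hand is roughly (arguments copied from this run; -c enables conserve_cpu):

  ./build/bin/spdk_tgt &
  ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
  ./scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
  ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev

The test asserts each readback (name, filename, io_mechanism, conserve_cpu) against the values it passed in, then kills the target.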
00:19:39.302 [2024-11-20 13:38:51.175631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71097 ] 00:19:39.561 [2024-11-20 13:38:51.355859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.561 [2024-11-20 13:38:51.478123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.497 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.497 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:40.497 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:19:40.497 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.497 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.497 xnvme_bdev 00:19:40.497 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.497 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:40.497 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:40.497 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.497 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:40.497 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.497 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.497 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:40.497 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:40.758 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:40.758 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:40.758 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.758 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.758 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.758 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:19:40.758 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:40.758 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:40.758 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:40.758 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.758 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.758 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.758 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:19:40.758 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:40.759 13:38:52 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71097 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71097 ']' 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71097 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71097 00:19:40.759 killing process with pid 71097 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71097' 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71097 00:19:40.759 13:38:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71097 00:19:43.293 ************************************ 00:19:43.293 END TEST xnvme_rpc 00:19:43.293 ************************************ 00:19:43.293 00:19:43.293 real 0m4.033s 00:19:43.293 user 0m4.034s 00:19:43.293 sys 0m0.573s 00:19:43.293 13:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.293 13:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:43.293 13:38:55 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:43.293 13:38:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:43.293 13:38:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.293 13:38:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:43.293 ************************************ 00:19:43.293 START TEST xnvme_bdevperf 00:19:43.293 ************************************ 00:19:43.293 13:38:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:43.293 13:38:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:43.293 13:38:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:19:43.293 13:38:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:43.293 13:38:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:43.293 13:38:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:19:43.293 13:38:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:43.293 13:38:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:43.293 { 00:19:43.293 "subsystems": [ 00:19:43.293 { 00:19:43.293 "subsystem": "bdev", 00:19:43.293 "config": [ 00:19:43.293 { 00:19:43.293 "params": { 00:19:43.293 "io_mechanism": "libaio", 00:19:43.293 "conserve_cpu": true, 00:19:43.294 "filename": "/dev/nvme0n1", 00:19:43.294 "name": "xnvme_bdev" 00:19:43.294 }, 00:19:43.294 "method": "bdev_xnvme_create" 00:19:43.294 }, 00:19:43.294 { 00:19:43.294 "method": "bdev_wait_for_examine" 00:19:43.294 } 00:19:43.294 ] 00:19:43.294 } 00:19:43.294 ] 00:19:43.294 } 00:19:43.552 [2024-11-20 13:38:55.254385] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:19:43.552 [2024-11-20 13:38:55.254512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71177 ] 00:19:43.552 [2024-11-20 13:38:55.421841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.811 [2024-11-20 13:38:55.542107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.136 Running I/O for 5 seconds... 00:19:46.028 36648.00 IOPS, 143.16 MiB/s [2024-11-20T13:38:58.920Z] 37523.00 IOPS, 146.57 MiB/s [2024-11-20T13:39:00.307Z] 37682.67 IOPS, 147.20 MiB/s [2024-11-20T13:39:01.255Z] 37854.50 IOPS, 147.87 MiB/s 00:19:49.298 Latency(us) 00:19:49.298 [2024-11-20T13:39:01.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.298 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:49.298 xnvme_bdev : 5.00 37896.49 148.03 0.00 0.00 1685.01 175.19 9843.56 00:19:49.298 [2024-11-20T13:39:01.255Z] =================================================================================================================== 00:19:49.298 [2024-11-20T13:39:01.255Z] Total : 37896.49 148.03 0.00 0.00 1685.01 175.19 9843.56 00:19:50.233 13:39:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:50.234 13:39:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:50.234 13:39:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:50.234 13:39:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:50.234 13:39:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:50.234 { 00:19:50.234 "subsystems": [ 00:19:50.234 { 00:19:50.234 "subsystem": "bdev", 00:19:50.234 "config": [ 00:19:50.234 { 00:19:50.234 "params": { 00:19:50.234 "io_mechanism": "libaio", 00:19:50.234 "conserve_cpu": true, 00:19:50.234 "filename": "/dev/nvme0n1", 00:19:50.234 "name": "xnvme_bdev" 00:19:50.234 }, 00:19:50.234 "method": "bdev_xnvme_create" 00:19:50.234 }, 00:19:50.234 { 00:19:50.234 "method": "bdev_wait_for_examine" 00:19:50.234 } 00:19:50.234 ] 00:19:50.234 } 00:19:50.234 ] 00:19:50.234 } 00:19:50.492 [2024-11-20 13:39:02.212037] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
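[annotation] The only delta between this block and the previous libaio block is "conserve_cpu": true in the generated config; xnvme.sh walks a full io_mechanism x conserve_cpu matrix and reruns the same three tests per cell. A runnable reconstruction of that driver loop, inferred from the xtrace in this log (the harness's run_test calls are replaced by an echo stub, and the array contents are assumptions — only libaio/io_uring and false/true are visible here):

  declare -A method_bdev_xnvme_create_0
  xnvme_io=(libaio io_uring)
  xnvme_conserve_cpu=(false true)
  for io in "${xnvme_io[@]}"; do
    method_bdev_xnvme_create_0[io_mechanism]=$io
    for cc in "${xnvme_conserve_cpu[@]}"; do
      method_bdev_xnvme_create_0[conserve_cpu]=$cc
      echo "run xnvme_rpc, xnvme_bdevperf, xnvme_fio_plugin: io=$io cc=$cc"
    done
  done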
00:19:50.492 [2024-11-20 13:39:02.212179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71263 ] 00:19:50.492 [2024-11-20 13:39:02.393393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.751 [2024-11-20 13:39:02.518796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.008 Running I/O for 5 seconds... 00:19:53.358 37625.00 IOPS, 146.97 MiB/s [2024-11-20T13:39:06.252Z] 37789.00 IOPS, 147.61 MiB/s [2024-11-20T13:39:07.188Z] 37926.33 IOPS, 148.15 MiB/s [2024-11-20T13:39:08.141Z] 38193.50 IOPS, 149.19 MiB/s [2024-11-20T13:39:08.141Z] 37626.60 IOPS, 146.98 MiB/s 00:19:56.184 Latency(us) 00:19:56.184 [2024-11-20T13:39:08.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.184 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:56.184 xnvme_bdev : 5.00 37593.29 146.85 0.00 0.00 1697.80 62.92 8790.77 00:19:56.184 [2024-11-20T13:39:08.141Z] =================================================================================================================== 00:19:56.184 [2024-11-20T13:39:08.141Z] Total : 37593.29 146.85 0.00 0.00 1697.80 62.92 8790.77 00:19:57.563 00:19:57.563 real 0m13.945s 00:19:57.563 user 0m5.423s 00:19:57.563 sys 0m5.858s 00:19:57.563 13:39:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.563 13:39:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:57.563 ************************************ 00:19:57.563 END TEST xnvme_bdevperf 00:19:57.563 ************************************ 00:19:57.563 13:39:09 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:57.563 13:39:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:57.563 13:39:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.563 13:39:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:57.563 ************************************ 00:19:57.563 START TEST xnvme_fio_plugin 00:19:57.563 ************************************ 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:57.563 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:57.564 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:57.564 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:57.564 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:57.564 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:57.564 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:57.564 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:57.564 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:57.564 13:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:57.564 { 00:19:57.564 "subsystems": [ 00:19:57.564 { 00:19:57.564 "subsystem": "bdev", 00:19:57.564 "config": [ 00:19:57.564 { 00:19:57.564 "params": { 00:19:57.564 "io_mechanism": "libaio", 00:19:57.564 "conserve_cpu": true, 00:19:57.564 "filename": "/dev/nvme0n1", 00:19:57.564 "name": "xnvme_bdev" 00:19:57.564 }, 00:19:57.564 "method": "bdev_xnvme_create" 00:19:57.564 }, 00:19:57.564 { 00:19:57.564 "method": "bdev_wait_for_examine" 00:19:57.564 } 00:19:57.564 ] 00:19:57.564 } 00:19:57.564 ] 00:19:57.564 } 00:19:57.564 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:57.564 fio-3.35 00:19:57.564 Starting 1 thread 00:20:04.129 00:20:04.129 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71388: Wed Nov 20 13:39:15 2024 00:20:04.129 read: IOPS=40.0k, BW=156MiB/s (164MB/s)(781MiB/5001msec) 00:20:04.129 slat (usec): min=4, max=2079, avg=21.87, stdev=26.55 00:20:04.129 clat (usec): min=90, max=6295, avg=929.10, stdev=598.22 00:20:04.129 lat (usec): min=139, max=6377, avg=950.97, stdev=603.24 00:20:04.129 clat percentiles (usec): 00:20:04.129 | 1.00th=[ 180], 5.00th=[ 258], 10.00th=[ 334], 20.00th=[ 474], 00:20:04.129 | 30.00th=[ 603], 40.00th=[ 725], 50.00th=[ 840], 60.00th=[ 955], 00:20:04.129 | 70.00th=[ 1090], 80.00th=[ 1237], 90.00th=[ 1500], 95.00th=[ 1893], 00:20:04.129 | 99.00th=[ 3458], 99.50th=[ 4047], 99.90th=[ 4883], 99.95th=[ 5145], 00:20:04.129 | 99.99th=[ 5735] 00:20:04.129 bw ( KiB/s): min=122264, max=184744, 
per=99.56%, avg=159152.89, stdev=17150.98, samples=9 00:20:04.129 iops : min=30566, max=46186, avg=39788.44, stdev=4287.65, samples=9 00:20:04.129 lat (usec) : 100=0.03%, 250=4.52%, 500=17.53%, 750=20.33%, 1000=21.07% 00:20:04.129 lat (msec) : 2=32.09%, 4=3.90%, 10=0.52% 00:20:04.129 cpu : usr=25.22%, sys=52.48%, ctx=83, majf=0, minf=764 00:20:04.129 IO depths : 1=0.1%, 2=1.2%, 4=4.4%, 8=11.3%, 16=25.8%, 32=55.4%, >=64=1.8% 00:20:04.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.129 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:20:04.129 issued rwts: total=199852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:04.129 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:04.129 00:20:04.129 Run status group 0 (all jobs): 00:20:04.129 READ: bw=156MiB/s (164MB/s), 156MiB/s-156MiB/s (164MB/s-164MB/s), io=781MiB (819MB), run=5001-5001msec 00:20:04.698 ----------------------------------------------------- 00:20:04.698 Suppressions used: 00:20:04.698 count bytes template 00:20:04.698 1 11 /usr/src/fio/parse.c 00:20:04.698 1 8 libtcmalloc_minimal.so 00:20:04.698 1 904 libcrypto.so 00:20:04.698 ----------------------------------------------------- 00:20:04.698 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:04.956 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:04.957 13:39:16 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:04.957 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:04.957 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:04.957 13:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:04.957 { 00:20:04.957 "subsystems": [ 00:20:04.957 { 00:20:04.957 "subsystem": "bdev", 00:20:04.957 "config": [ 00:20:04.957 { 00:20:04.957 "params": { 00:20:04.957 "io_mechanism": "libaio", 00:20:04.957 "conserve_cpu": true, 00:20:04.957 "filename": "/dev/nvme0n1", 00:20:04.957 "name": "xnvme_bdev" 00:20:04.957 }, 00:20:04.957 "method": "bdev_xnvme_create" 00:20:04.957 }, 00:20:04.957 { 00:20:04.957 "method": "bdev_wait_for_examine" 00:20:04.957 } 00:20:04.957 ] 00:20:04.957 } 00:20:04.957 ] 00:20:04.957 } 00:20:05.215 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:05.215 fio-3.35 00:20:05.215 Starting 1 thread 00:20:11.773 00:20:11.773 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71481: Wed Nov 20 13:39:22 2024 00:20:11.773 write: IOPS=39.0k, BW=152MiB/s (160MB/s)(761MiB/5001msec); 0 zone resets 00:20:11.773 slat (usec): min=4, max=755, avg=21.75, stdev=28.86 00:20:11.773 clat (usec): min=19, max=7570, avg=995.62, stdev=649.33 00:20:11.773 lat (usec): min=52, max=7661, avg=1017.37, stdev=653.94 00:20:11.773 clat percentiles (usec): 00:20:11.773 | 1.00th=[ 184], 5.00th=[ 281], 10.00th=[ 367], 20.00th=[ 515], 00:20:11.773 | 30.00th=[ 644], 40.00th=[ 758], 50.00th=[ 865], 60.00th=[ 996], 00:20:11.773 | 70.00th=[ 1139], 80.00th=[ 1336], 90.00th=[ 1680], 95.00th=[ 2114], 00:20:11.773 | 99.00th=[ 3687], 99.50th=[ 4228], 99.90th=[ 5080], 99.95th=[ 5604], 00:20:11.773 | 99.99th=[ 6325] 00:20:11.773 bw ( KiB/s): min=146592, max=171096, per=100.00%, avg=157557.33, stdev=9148.03, samples=9 00:20:11.773 iops : min=36648, max=42774, avg=39389.33, stdev=2287.01, samples=9 00:20:11.773 lat (usec) : 20=0.01%, 50=0.01%, 100=0.06%, 250=3.50%, 500=15.35% 00:20:11.773 lat (usec) : 750=20.45%, 1000=21.16% 00:20:11.773 lat (msec) : 2=33.50%, 4=5.31%, 10=0.67% 00:20:11.773 cpu : usr=27.94%, sys=50.62%, ctx=50, majf=0, minf=765 00:20:11.773 IO depths : 1=0.1%, 2=1.1%, 4=4.0%, 8=10.3%, 16=24.7%, 32=57.7%, >=64=2.0% 00:20:11.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.773 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:20:11.773 issued rwts: total=0,194796,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.773 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:11.773 00:20:11.773 Run status group 0 (all jobs): 00:20:11.773 WRITE: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), io=761MiB (798MB), run=5001-5001msec 00:20:12.372 ----------------------------------------------------- 00:20:12.372 Suppressions used: 00:20:12.372 count bytes template 00:20:12.372 1 11 /usr/src/fio/parse.c 00:20:12.372 1 8 libtcmalloc_minimal.so 00:20:12.372 1 904 libcrypto.so 00:20:12.372 ----------------------------------------------------- 00:20:12.372 00:20:12.372 00:20:12.372 real 0m14.985s 00:20:12.372 user 0m6.536s 
00:20:12.372 sys 0m5.919s 00:20:12.372 13:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.372 13:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:12.372 ************************************ 00:20:12.372 END TEST xnvme_fio_plugin 00:20:12.372 ************************************ 00:20:12.372 13:39:24 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:20:12.372 13:39:24 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:20:12.372 13:39:24 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:20:12.372 13:39:24 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:20:12.372 13:39:24 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:20:12.372 13:39:24 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:12.372 13:39:24 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:20:12.372 13:39:24 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:20:12.372 13:39:24 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:12.372 13:39:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:12.372 13:39:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.372 13:39:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:12.372 ************************************ 00:20:12.372 START TEST xnvme_rpc 00:20:12.372 ************************************ 00:20:12.372 13:39:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:12.372 13:39:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:12.372 13:39:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:12.372 13:39:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:12.372 13:39:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:12.372 13:39:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:12.372 13:39:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71575 00:20:12.372 13:39:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71575 00:20:12.372 13:39:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71575 ']' 00:20:12.372 13:39:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.372 13:39:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.372 13:39:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.372 13:39:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.372 13:39:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.636 [2024-11-20 13:39:24.314646] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
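[annotation] Third cell of the matrix: the identical RPC round-trip with the io mechanism switched to io_uring and the conserve_cpu switch left off (the cc["false"] entry in xnvme.sh is empty), so the bdev defaults to false — which the jq readback below confirms. By hand, roughly:

  ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
  ./scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'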
00:20:12.636 [2024-11-20 13:39:24.315018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71575 ] 00:20:12.636 [2024-11-20 13:39:24.485059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.896 [2024-11-20 13:39:24.607323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:13.831 xnvme_bdev 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:13.831 13:39:25 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.831 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:13.832 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.832 13:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71575 00:20:13.832 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71575 ']' 00:20:13.832 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71575 00:20:13.832 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:13.832 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.832 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71575 00:20:13.832 killing process with pid 71575 00:20:13.832 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:13.832 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:13.832 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71575' 00:20:13.832 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71575 00:20:13.832 13:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71575 00:20:16.364 00:20:16.364 real 0m4.015s 00:20:16.364 user 0m4.142s 00:20:16.364 sys 0m0.531s 00:20:16.364 13:39:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.364 ************************************ 00:20:16.364 END TEST xnvme_rpc 00:20:16.364 ************************************ 00:20:16.364 13:39:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:16.364 13:39:28 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:16.364 13:39:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:16.364 13:39:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.364 13:39:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:16.364 ************************************ 00:20:16.364 START TEST xnvme_bdevperf 00:20:16.364 ************************************ 00:20:16.364 13:39:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:16.364 13:39:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:16.364 13:39:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:20:16.364 13:39:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:16.364 13:39:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:16.364 13:39:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:20:16.364 13:39:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:16.364 13:39:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:16.621 { 00:20:16.621 "subsystems": [ 00:20:16.621 { 00:20:16.621 "subsystem": "bdev", 00:20:16.621 "config": [ 00:20:16.621 { 00:20:16.621 "params": { 00:20:16.621 "io_mechanism": "io_uring", 00:20:16.621 "conserve_cpu": false, 00:20:16.621 "filename": "/dev/nvme0n1", 00:20:16.621 "name": "xnvme_bdev" 00:20:16.621 }, 00:20:16.621 "method": "bdev_xnvme_create" 00:20:16.621 }, 00:20:16.621 { 00:20:16.621 "method": "bdev_wait_for_examine" 00:20:16.621 } 00:20:16.621 ] 00:20:16.621 } 00:20:16.621 ] 00:20:16.621 } 00:20:16.621 [2024-11-20 13:39:28.400225] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:20:16.621 [2024-11-20 13:39:28.400358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71660 ] 00:20:16.879 [2024-11-20 13:39:28.582415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.879 [2024-11-20 13:39:28.704653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.135 Running I/O for 5 seconds... 00:20:19.455 29583.00 IOPS, 115.56 MiB/s [2024-11-20T13:39:32.378Z] 30975.00 IOPS, 121.00 MiB/s [2024-11-20T13:39:33.333Z] 28904.00 IOPS, 112.91 MiB/s [2024-11-20T13:39:34.270Z] 27652.25 IOPS, 108.02 MiB/s 00:20:22.313 Latency(us) 00:20:22.313 [2024-11-20T13:39:34.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.313 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:22.313 xnvme_bdev : 5.00 26931.78 105.20 0.00 0.00 2369.02 779.72 8159.10 00:20:22.313 [2024-11-20T13:39:34.270Z] =================================================================================================================== 00:20:22.313 [2024-11-20T13:39:34.270Z] Total : 26931.78 105.20 0.00 0.00 2369.02 779.72 8159.10 00:20:23.706 13:39:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:23.706 13:39:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:23.706 13:39:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:23.706 13:39:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:23.706 13:39:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:23.706 { 00:20:23.706 "subsystems": [ 00:20:23.706 { 00:20:23.706 "subsystem": "bdev", 00:20:23.706 "config": [ 00:20:23.706 { 00:20:23.706 "params": { 00:20:23.706 "io_mechanism": "io_uring", 00:20:23.706 "conserve_cpu": false, 00:20:23.706 "filename": "/dev/nvme0n1", 00:20:23.706 "name": "xnvme_bdev" 00:20:23.706 }, 00:20:23.706 "method": "bdev_xnvme_create" 00:20:23.706 }, 00:20:23.706 { 00:20:23.706 "method": "bdev_wait_for_examine" 00:20:23.706 } 00:20:23.706 ] 00:20:23.706 } 00:20:23.706 ] 00:20:23.706 } 00:20:23.706 [2024-11-20 13:39:35.507093] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
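[annotation] The "local -n io_pattern_ref=io_uring" lines in these bdevperf runs are bash namerefs: the workload list is looked up through an array named after the io mechanism under test. A runnable toy version of the pattern (array contents are assumptions; this log only ever runs randread and randwrite):

  #!/usr/bin/env bash
  libaio=(randread randwrite)
  io_uring=(randread randwrite)
  run_patterns() {
    local -n io_pattern_ref=$1   # nameref: resolves to the array named by $1
    local io_pattern
    for io_pattern in "${io_pattern_ref[@]}"; do
      echo "bdevperf -w $io_pattern via $1"
    done
  }
  run_patterns io_uring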
00:20:23.706 [2024-11-20 13:39:35.507237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71735 ] 00:20:23.975 [2024-11-20 13:39:35.692914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.975 [2024-11-20 13:39:35.846343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.571 Running I/O for 5 seconds... 00:20:26.447 28737.00 IOPS, 112.25 MiB/s [2024-11-20T13:39:39.340Z] 26269.00 IOPS, 102.61 MiB/s [2024-11-20T13:39:40.718Z] 26450.00 IOPS, 103.32 MiB/s [2024-11-20T13:39:41.284Z] 26834.00 IOPS, 104.82 MiB/s [2024-11-20T13:39:41.544Z] 27227.00 IOPS, 106.36 MiB/s 00:20:29.587 Latency(us) 00:20:29.587 [2024-11-20T13:39:41.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.587 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:29.587 xnvme_bdev : 5.01 27187.80 106.20 0.00 0.00 2346.51 166.97 7422.15 00:20:29.587 [2024-11-20T13:39:41.544Z] =================================================================================================================== 00:20:29.587 [2024-11-20T13:39:41.544Z] Total : 27187.80 106.20 0.00 0.00 2346.51 166.97 7422.15 00:20:30.531 00:20:30.531 real 0m14.140s 00:20:30.531 user 0m7.278s 00:20:30.531 sys 0m6.618s 00:20:30.531 13:39:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:30.531 13:39:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:30.531 ************************************ 00:20:30.531 END TEST xnvme_bdevperf 00:20:30.531 ************************************ 00:20:30.790 13:39:42 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:20:30.790 13:39:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:30.790 13:39:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:30.790 13:39:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:30.790 ************************************ 00:20:30.790 START TEST xnvme_fio_plugin 00:20:30.790 ************************************ 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:30.790 
13:39:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:30.790 13:39:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:30.790 { 00:20:30.790 "subsystems": [ 00:20:30.790 { 00:20:30.790 "subsystem": "bdev", 00:20:30.790 "config": [ 00:20:30.790 { 00:20:30.790 "params": { 00:20:30.790 "io_mechanism": "io_uring", 00:20:30.790 "conserve_cpu": false, 00:20:30.790 "filename": "/dev/nvme0n1", 00:20:30.790 "name": "xnvme_bdev" 00:20:30.790 }, 00:20:30.790 "method": "bdev_xnvme_create" 00:20:30.790 }, 00:20:30.790 { 00:20:30.790 "method": "bdev_wait_for_examine" 00:20:30.790 } 00:20:30.790 ] 00:20:30.790 } 00:20:30.790 ] 00:20:30.790 } 00:20:31.049 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:31.049 fio-3.35 00:20:31.049 Starting 1 thread 00:20:37.606 00:20:37.606 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71860: Wed Nov 20 13:39:48 2024 00:20:37.606 read: IOPS=29.6k, BW=116MiB/s (121MB/s)(578MiB/5002msec) 00:20:37.606 slat (usec): min=2, max=1120, avg= 6.37, stdev= 4.15 00:20:37.606 clat (usec): min=959, max=4489, avg=1910.31, stdev=393.95 00:20:37.606 lat (usec): min=964, max=4517, avg=1916.68, stdev=395.28 00:20:37.606 clat percentiles (usec): 00:20:37.606 | 1.00th=[ 1188], 5.00th=[ 1352], 10.00th=[ 1450], 20.00th=[ 1565], 00:20:37.606 | 30.00th=[ 1663], 40.00th=[ 1762], 50.00th=[ 1844], 60.00th=[ 1958], 00:20:37.606 | 70.00th=[ 2114], 80.00th=[ 2278], 90.00th=[ 2474], 95.00th=[ 2606], 00:20:37.606 | 99.00th=[ 2835], 99.50th=[ 2966], 99.90th=[ 3687], 99.95th=[ 3949], 00:20:37.606 | 99.99th=[ 4359] 00:20:37.606 bw ( KiB/s): 
min=94696, max=137728, per=99.09%, avg=117304.00, stdev=12669.12, samples=9 00:20:37.606 iops : min=23672, max=34432, avg=29326.00, stdev=3167.93, samples=9 00:20:37.606 lat (usec) : 1000=0.01% 00:20:37.606 lat (msec) : 2=63.16%, 4=36.79%, 10=0.05% 00:20:37.606 cpu : usr=34.39%, sys=64.49%, ctx=14, majf=0, minf=762 00:20:37.606 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:37.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.606 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:20:37.606 issued rwts: total=148032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.606 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:37.606 00:20:37.606 Run status group 0 (all jobs): 00:20:37.606 READ: bw=116MiB/s (121MB/s), 116MiB/s-116MiB/s (121MB/s-121MB/s), io=578MiB (606MB), run=5002-5002msec 00:20:38.173 ----------------------------------------------------- 00:20:38.173 Suppressions used: 00:20:38.173 count bytes template 00:20:38.173 1 11 /usr/src/fio/parse.c 00:20:38.173 1 8 libtcmalloc_minimal.so 00:20:38.173 1 904 libcrypto.so 00:20:38.173 ----------------------------------------------------- 00:20:38.173 00:20:38.173 13:39:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:38.173 13:39:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:38.173 13:39:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:38.173 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:38.173 13:39:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:38.173 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:38.173 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:38.173 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:38.173 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:38.173 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.173 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:38.174 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:38.174 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:38.174 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.174 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:38.174 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:38.174 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:38.174 13:39:49 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:38.174 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:38.174 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:38.174 13:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:38.174 { 00:20:38.174 "subsystems": [ 00:20:38.174 { 00:20:38.174 "subsystem": "bdev", 00:20:38.174 "config": [ 00:20:38.174 { 00:20:38.174 "params": { 00:20:38.174 "io_mechanism": "io_uring", 00:20:38.174 "conserve_cpu": false, 00:20:38.174 "filename": "/dev/nvme0n1", 00:20:38.174 "name": "xnvme_bdev" 00:20:38.174 }, 00:20:38.174 "method": "bdev_xnvme_create" 00:20:38.174 }, 00:20:38.174 { 00:20:38.174 "method": "bdev_wait_for_examine" 00:20:38.174 } 00:20:38.174 ] 00:20:38.174 } 00:20:38.174 ] 00:20:38.174 } 00:20:38.511 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:38.511 fio-3.35 00:20:38.511 Starting 1 thread 00:20:45.070 00:20:45.070 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71959: Wed Nov 20 13:39:56 2024 00:20:45.070 write: IOPS=37.1k, BW=145MiB/s (152MB/s)(725MiB/5001msec); 0 zone resets 00:20:45.070 slat (nsec): min=2493, max=70681, avg=4867.32, stdev=2026.23 00:20:45.070 clat (usec): min=244, max=6474, avg=1531.85, stdev=356.60 00:20:45.070 lat (usec): min=252, max=6480, avg=1536.72, stdev=357.66 00:20:45.070 clat percentiles (usec): 00:20:45.070 | 1.00th=[ 922], 5.00th=[ 1004], 10.00th=[ 1057], 20.00th=[ 1156], 00:20:45.070 | 30.00th=[ 1287], 40.00th=[ 1450], 50.00th=[ 1565], 60.00th=[ 1663], 00:20:45.070 | 70.00th=[ 1745], 80.00th=[ 1827], 90.00th=[ 1958], 95.00th=[ 2057], 00:20:45.070 | 99.00th=[ 2343], 99.50th=[ 2507], 99.90th=[ 2769], 99.95th=[ 2868], 00:20:45.070 | 99.99th=[ 6325] 00:20:45.070 bw ( KiB/s): min=123392, max=210432, per=100.00%, avg=150269.00, stdev=30195.70, samples=9 00:20:45.070 iops : min=30848, max=52608, avg=37567.22, stdev=7548.95, samples=9 00:20:45.070 lat (usec) : 250=0.01%, 1000=4.95% 00:20:45.070 lat (msec) : 2=87.82%, 4=7.20%, 10=0.03% 00:20:45.070 cpu : usr=31.58%, sys=67.46%, ctx=10, majf=0, minf=763 00:20:45.070 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:45.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.070 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:20:45.070 issued rwts: total=0,185725,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.070 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:45.070 00:20:45.070 Run status group 0 (all jobs): 00:20:45.070 WRITE: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=725MiB (761MB), run=5001-5001msec 00:20:45.638 ----------------------------------------------------- 00:20:45.638 Suppressions used: 00:20:45.638 count bytes template 00:20:45.638 1 11 /usr/src/fio/parse.c 00:20:45.638 1 8 libtcmalloc_minimal.so 00:20:45.638 1 904 libcrypto.so 00:20:45.638 ----------------------------------------------------- 00:20:45.638 00:20:45.638 00:20:45.638 real 0m14.873s 00:20:45.638 user 0m7.175s 00:20:45.638 sys 0m7.357s 00:20:45.638 
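(Aside: the fio jobs above drive SPDK's external spdk_bdev ioengine; the bdev layer is configured from the JSON document streamed over /dev/fd/62, and under ASan the plugin is LD_PRELOADed together with libasan.so.8 so the sanitizer runtime initializes first. Below is a minimal standalone sketch of the randread job with the JSON written to a regular file instead of /dev/fd/62; the /tmp path and an idle /dev/nvme0n1 are assumptions, and the harness's fio_bdev/xtrace wrappers are omitted.)

cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Same flags as the logged invocation; prepend /usr/lib64/libasan.so.8 to
# LD_PRELOAD (as the harness does) when fio itself is not built with ASan.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev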
************************************ 00:20:45.638 END TEST xnvme_fio_plugin 00:20:45.638 ************************************ 00:20:45.638 13:39:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:45.638 13:39:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:45.638 13:39:57 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:45.638 13:39:57 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:20:45.638 13:39:57 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:20:45.638 13:39:57 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:45.638 13:39:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:45.638 13:39:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:45.638 13:39:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:45.638 ************************************ 00:20:45.638 START TEST xnvme_rpc 00:20:45.638 ************************************ 00:20:45.638 13:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:45.638 13:39:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:45.638 13:39:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:45.638 13:39:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:45.638 13:39:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:45.638 13:39:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72045 00:20:45.639 13:39:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:45.639 13:39:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72045 00:20:45.639 13:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72045 ']' 00:20:45.639 13:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.639 13:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.639 13:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.639 13:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.639 13:39:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:45.639 [2024-11-20 13:39:57.574828] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:20:45.639 [2024-11-20 13:39:57.575107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72045 ] 00:20:45.897 [2024-11-20 13:39:57.775852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.157 [2024-11-20 13:39:57.898915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.094 xnvme_bdev 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:47.094 13:39:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.094 13:39:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:20:47.094 13:39:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:47.094 13:39:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.094 13:39:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.094 13:39:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.094 13:39:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72045 00:20:47.094 13:39:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72045 ']' 00:20:47.094 13:39:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72045 00:20:47.094 13:39:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:47.094 13:39:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.094 13:39:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72045 00:20:47.363 killing process with pid 72045 00:20:47.363 13:39:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:47.363 13:39:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:47.363 13:39:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72045' 00:20:47.363 13:39:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72045 00:20:47.363 13:39:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72045 00:20:49.895 00:20:49.895 real 0m4.047s 00:20:49.895 user 0m4.148s 00:20:49.895 sys 0m0.573s 00:20:49.895 13:40:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.895 ************************************ 00:20:49.895 END TEST xnvme_rpc 00:20:49.895 ************************************ 00:20:49.895 13:40:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:49.895 13:40:01 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:49.895 13:40:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:49.895 13:40:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.895 13:40:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:49.895 ************************************ 00:20:49.895 START TEST xnvme_bdevperf 00:20:49.895 ************************************ 00:20:49.895 13:40:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:49.895 13:40:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:49.895 13:40:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:20:49.895 13:40:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:49.895 13:40:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:49.895 13:40:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T 
xnvme_bdev -o 4096 00:20:49.895 13:40:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:49.895 13:40:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:49.895 { 00:20:49.895 "subsystems": [ 00:20:49.895 { 00:20:49.895 "subsystem": "bdev", 00:20:49.895 "config": [ 00:20:49.895 { 00:20:49.895 "params": { 00:20:49.895 "io_mechanism": "io_uring", 00:20:49.895 "conserve_cpu": true, 00:20:49.895 "filename": "/dev/nvme0n1", 00:20:49.895 "name": "xnvme_bdev" 00:20:49.895 }, 00:20:49.895 "method": "bdev_xnvme_create" 00:20:49.895 }, 00:20:49.895 { 00:20:49.895 "method": "bdev_wait_for_examine" 00:20:49.895 } 00:20:49.895 ] 00:20:49.895 } 00:20:49.895 ] 00:20:49.895 } 00:20:49.895 [2024-11-20 13:40:01.677962] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:20:49.895 [2024-11-20 13:40:01.678095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72130 ] 00:20:50.169 [2024-11-20 13:40:01.859851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.169 [2024-11-20 13:40:01.976041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.428 Running I/O for 5 seconds... 00:20:52.734 42783.00 IOPS, 167.12 MiB/s [2024-11-20T13:40:05.623Z] 40175.00 IOPS, 156.93 MiB/s [2024-11-20T13:40:06.608Z] 37314.67 IOPS, 145.76 MiB/s [2024-11-20T13:40:07.543Z] 33903.75 IOPS, 132.44 MiB/s 00:20:55.586 Latency(us) 00:20:55.586 [2024-11-20T13:40:07.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.586 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:55.586 xnvme_bdev : 5.01 31678.19 123.74 0.00 0.00 2014.34 178.48 8843.41 00:20:55.586 [2024-11-20T13:40:07.543Z] =================================================================================================================== 00:20:55.586 [2024-11-20T13:40:07.543Z] Total : 31678.19 123.74 0.00 0.00 2014.34 178.48 8843.41 00:20:56.963 13:40:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:56.963 13:40:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:56.963 13:40:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:56.963 13:40:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:56.963 13:40:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:56.963 { 00:20:56.963 "subsystems": [ 00:20:56.963 { 00:20:56.963 "subsystem": "bdev", 00:20:56.963 "config": [ 00:20:56.963 { 00:20:56.963 "params": { 00:20:56.963 "io_mechanism": "io_uring", 00:20:56.963 "conserve_cpu": true, 00:20:56.963 "filename": "/dev/nvme0n1", 00:20:56.963 "name": "xnvme_bdev" 00:20:56.963 }, 00:20:56.963 "method": "bdev_xnvme_create" 00:20:56.963 }, 00:20:56.963 { 00:20:56.963 "method": "bdev_wait_for_examine" 00:20:56.963 } 00:20:56.963 ] 00:20:56.963 } 00:20:56.963 ] 00:20:56.963 } 00:20:56.963 [2024-11-20 13:40:08.787518] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:20:56.963 [2024-11-20 13:40:08.787672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72211 ] 00:20:57.221 [2024-11-20 13:40:08.973064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.221 [2024-11-20 13:40:09.129337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.787 Running I/O for 5 seconds... 00:20:59.660 24657.00 IOPS, 96.32 MiB/s [2024-11-20T13:40:12.996Z] 24616.50 IOPS, 96.16 MiB/s [2024-11-20T13:40:13.931Z] 24240.33 IOPS, 94.69 MiB/s [2024-11-20T13:40:14.869Z] 25428.25 IOPS, 99.33 MiB/s 00:21:02.912 Latency(us) 00:21:02.912 [2024-11-20T13:40:14.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.912 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:21:02.912 xnvme_bdev : 5.01 25194.08 98.41 0.00 0.00 2532.02 1019.89 15897.09 00:21:02.912 [2024-11-20T13:40:14.869Z] =================================================================================================================== 00:21:02.912 [2024-11-20T13:40:14.869Z] Total : 25194.08 98.41 0.00 0.00 2532.02 1019.89 15897.09 00:21:04.290 ************************************ 00:21:04.290 END TEST xnvme_bdevperf 00:21:04.290 ************************************ 00:21:04.290 00:21:04.290 real 0m14.272s 00:21:04.290 user 0m8.374s 00:21:04.290 sys 0m5.377s 00:21:04.290 13:40:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.290 13:40:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:04.290 13:40:15 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:04.290 13:40:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:04.290 13:40:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.290 13:40:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:04.290 ************************************ 00:21:04.290 START TEST xnvme_fio_plugin 00:21:04.290 ************************************ 00:21:04.290 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:04.290 13:40:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:04.290 13:40:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:21:04.290 13:40:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:04.290 13:40:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:04.290 13:40:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:04.291 { 00:21:04.291 "subsystems": [ 00:21:04.291 { 00:21:04.291 "subsystem": "bdev", 00:21:04.291 "config": [ 00:21:04.291 { 00:21:04.291 "params": { 00:21:04.291 "io_mechanism": "io_uring", 00:21:04.291 "conserve_cpu": true, 00:21:04.291 "filename": "/dev/nvme0n1", 00:21:04.291 "name": "xnvme_bdev" 00:21:04.291 }, 00:21:04.291 "method": "bdev_xnvme_create" 00:21:04.291 }, 00:21:04.291 { 00:21:04.291 "method": "bdev_wait_for_examine" 00:21:04.291 } 00:21:04.291 ] 00:21:04.291 } 00:21:04.291 ] 00:21:04.291 } 00:21:04.291 13:40:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:04.291 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:04.291 fio-3.35 00:21:04.291 Starting 1 thread 00:21:10.856 00:21:10.856 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72344: Wed Nov 20 13:40:22 2024 00:21:10.856 read: IOPS=34.3k, BW=134MiB/s (140MB/s)(670MiB/5002msec) 00:21:10.856 slat (usec): min=2, max=120, avg= 4.88, stdev= 2.29 00:21:10.856 clat (usec): min=783, max=7207, avg=1667.42, stdev=436.33 00:21:10.856 lat (usec): min=786, max=7217, avg=1672.31, stdev=437.41 00:21:10.856 clat percentiles (usec): 00:21:10.856 | 1.00th=[ 1012], 5.00th=[ 1139], 10.00th=[ 1221], 20.00th=[ 1336], 00:21:10.856 | 30.00th=[ 1418], 40.00th=[ 1500], 50.00th=[ 1582], 60.00th=[ 1680], 00:21:10.856 | 70.00th=[ 1827], 80.00th=[ 1991], 90.00th=[ 2212], 95.00th=[ 2376], 00:21:10.856 | 99.00th=[ 2802], 99.50th=[ 3228], 99.90th=[ 5342], 99.95th=[ 5997], 00:21:10.856 | 99.99th=[ 7046] 00:21:10.856 bw ( KiB/s): min=115200, max=162816, per=99.79%, avg=136816.89, 
stdev=15221.85, samples=9 00:21:10.856 iops : min=28800, max=40704, avg=34204.22, stdev=3805.46, samples=9 00:21:10.856 lat (usec) : 1000=0.86% 00:21:10.856 lat (msec) : 2=79.44%, 4=19.38%, 10=0.32% 00:21:10.856 cpu : usr=48.44%, sys=47.88%, ctx=14, majf=0, minf=762 00:21:10.856 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:10.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:10.856 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:21:10.856 issued rwts: total=171455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:10.856 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:10.856 00:21:10.856 Run status group 0 (all jobs): 00:21:10.856 READ: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=670MiB (702MB), run=5002-5002msec 00:21:11.792 ----------------------------------------------------- 00:21:11.792 Suppressions used: 00:21:11.792 count bytes template 00:21:11.792 1 11 /usr/src/fio/parse.c 00:21:11.792 1 8 libtcmalloc_minimal.so 00:21:11.792 1 904 libcrypto.so 00:21:11.792 ----------------------------------------------------- 00:21:11.792 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:11.792 13:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:11.792 { 00:21:11.792 "subsystems": [ 00:21:11.792 { 00:21:11.792 "subsystem": "bdev", 00:21:11.792 "config": [ 00:21:11.792 { 00:21:11.792 "params": { 00:21:11.792 "io_mechanism": "io_uring", 00:21:11.792 "conserve_cpu": true, 00:21:11.793 "filename": "/dev/nvme0n1", 00:21:11.793 "name": "xnvme_bdev" 00:21:11.793 }, 00:21:11.793 "method": "bdev_xnvme_create" 00:21:11.793 }, 00:21:11.793 { 00:21:11.793 "method": "bdev_wait_for_examine" 00:21:11.793 } 00:21:11.793 ] 00:21:11.793 } 00:21:11.793 ] 00:21:11.793 } 00:21:12.051 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:12.051 fio-3.35 00:21:12.051 Starting 1 thread 00:21:18.660 00:21:18.660 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72439: Wed Nov 20 13:40:29 2024 00:21:18.660 write: IOPS=31.0k, BW=121MiB/s (127MB/s)(606MiB/5002msec); 0 zone resets 00:21:18.660 slat (nsec): min=2919, max=75255, avg=5913.51, stdev=2662.15 00:21:18.660 clat (usec): min=884, max=4791, avg=1830.22, stdev=438.64 00:21:18.660 lat (usec): min=887, max=4797, avg=1836.14, stdev=440.41 00:21:18.660 clat percentiles (usec): 00:21:18.660 | 1.00th=[ 1106], 5.00th=[ 1254], 10.00th=[ 1319], 20.00th=[ 1434], 00:21:18.660 | 30.00th=[ 1516], 40.00th=[ 1614], 50.00th=[ 1729], 60.00th=[ 1909], 00:21:18.660 | 70.00th=[ 2114], 80.00th=[ 2245], 90.00th=[ 2442], 95.00th=[ 2606], 00:21:18.660 | 99.00th=[ 2835], 99.50th=[ 3032], 99.90th=[ 3425], 99.95th=[ 3523], 00:21:18.660 | 99.99th=[ 3621] 00:21:18.660 bw ( KiB/s): min=93696, max=163840, per=100.00%, avg=127145.78, stdev=25499.73, samples=9 00:21:18.660 iops : min=23424, max=40960, avg=31786.44, stdev=6374.93, samples=9 00:21:18.660 lat (usec) : 1000=0.18% 00:21:18.660 lat (msec) : 2=64.29%, 4=35.53%, 10=0.01% 00:21:18.660 cpu : usr=49.49%, sys=47.05%, ctx=11, majf=0, minf=763 00:21:18.660 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:18.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.660 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:21:18.660 issued rwts: total=0,155254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.660 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:18.660 00:21:18.660 Run status group 0 (all jobs): 00:21:18.660 WRITE: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=606MiB (636MB), run=5002-5002msec 00:21:19.229 ----------------------------------------------------- 00:21:19.229 Suppressions used: 00:21:19.229 count bytes template 00:21:19.229 1 11 /usr/src/fio/parse.c 00:21:19.229 1 8 libtcmalloc_minimal.so 00:21:19.229 1 904 libcrypto.so 00:21:19.229 ----------------------------------------------------- 00:21:19.229 00:21:19.229 00:21:19.229 real 0m15.009s 00:21:19.229 user 0m8.836s 00:21:19.229 sys 0m5.562s 00:21:19.229 13:40:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:19.229 
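(Aside: the conserve_cpu=true pass differs from the earlier io_uring run only in the -c flag handed to bdev_xnvme_create; the fio CPU split shifts from roughly usr 34%/sys 64% to usr 49%/sys 47% between the two runs. Against an already running spdk_tgt the same toggle can be exercised by hand; a sketch, assuming SPDK's stock scripts/rpc.py client on the default /var/tmp/spdk.sock socket, mirroring the rpc_cmd calls in the xnvme_rpc test above.)

# Create the xnvme bdev with conserve_cpu enabled (-c), read the flag back
# out of the bdev config, then delete the bdev again.
scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
scripts/rpc.py bdev_xnvme_delete xnvme_bdev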
************************************ 00:21:19.229 END TEST xnvme_fio_plugin 00:21:19.229 13:40:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:19.229 ************************************ 00:21:19.229 13:40:30 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:21:19.229 13:40:30 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:21:19.229 13:40:30 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:21:19.229 13:40:30 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:21:19.229 13:40:30 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:21:19.229 13:40:30 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:21:19.229 13:40:30 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:21:19.229 13:40:30 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:21:19.229 13:40:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:21:19.229 13:40:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:19.229 13:40:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.229 13:40:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:19.229 ************************************ 00:21:19.229 START TEST xnvme_rpc 00:21:19.229 ************************************ 00:21:19.229 13:40:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:21:19.229 13:40:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:21:19.229 13:40:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:21:19.229 13:40:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:21:19.229 13:40:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:21:19.229 13:40:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72525 00:21:19.229 13:40:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.229 13:40:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72525 00:21:19.229 13:40:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72525 ']' 00:21:19.229 13:40:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.229 13:40:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.229 13:40:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.229 13:40:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.229 13:40:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:19.229 [2024-11-20 13:40:31.125044] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:21:19.229 [2024-11-20 13:40:31.125998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72525 ] 00:21:19.489 [2024-11-20 13:40:31.311948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.489 [2024-11-20 13:40:31.429795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.425 xnvme_bdev 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.425 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72525 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72525 ']' 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72525 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72525 00:21:20.684 killing process with pid 72525 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72525' 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72525 00:21:20.684 13:40:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72525 00:21:23.216 00:21:23.216 real 0m3.954s 00:21:23.216 user 0m4.022s 00:21:23.216 sys 0m0.546s 00:21:23.216 13:40:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.216 13:40:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:23.216 ************************************ 00:21:23.216 END TEST xnvme_rpc 00:21:23.216 ************************************ 00:21:23.216 13:40:35 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:21:23.216 13:40:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:23.216 13:40:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.216 13:40:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:23.217 ************************************ 00:21:23.217 START TEST xnvme_bdevperf 00:21:23.217 ************************************ 00:21:23.217 13:40:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:21:23.217 13:40:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:21:23.217 13:40:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:21:23.217 13:40:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:23.217 13:40:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:23.217 13:40:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread 
-t 5 -T xnvme_bdev -o 4096 00:21:23.217 13:40:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:23.217 13:40:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:23.217 { 00:21:23.217 "subsystems": [ 00:21:23.217 { 00:21:23.217 "subsystem": "bdev", 00:21:23.217 "config": [ 00:21:23.217 { 00:21:23.217 "params": { 00:21:23.217 "io_mechanism": "io_uring_cmd", 00:21:23.217 "conserve_cpu": false, 00:21:23.217 "filename": "/dev/ng0n1", 00:21:23.217 "name": "xnvme_bdev" 00:21:23.217 }, 00:21:23.217 "method": "bdev_xnvme_create" 00:21:23.217 }, 00:21:23.217 { 00:21:23.217 "method": "bdev_wait_for_examine" 00:21:23.217 } 00:21:23.217 ] 00:21:23.217 } 00:21:23.217 ] 00:21:23.217 } 00:21:23.217 [2024-11-20 13:40:35.135626] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:21:23.217 [2024-11-20 13:40:35.135777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72609 ] 00:21:23.556 [2024-11-20 13:40:35.323511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.556 [2024-11-20 13:40:35.446753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.126 Running I/O for 5 seconds... 00:21:25.996 45248.00 IOPS, 176.75 MiB/s [2024-11-20T13:40:38.890Z] 40704.00 IOPS, 159.00 MiB/s [2024-11-20T13:40:39.840Z] 41663.67 IOPS, 162.75 MiB/s [2024-11-20T13:40:41.219Z] 41231.75 IOPS, 161.06 MiB/s [2024-11-20T13:40:41.219Z] 39974.20 IOPS, 156.15 MiB/s 00:21:29.262 Latency(us) 00:21:29.262 [2024-11-20T13:40:41.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.262 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:21:29.262 xnvme_bdev : 5.01 39918.41 155.93 0.00 0.00 1598.67 792.88 7158.95 00:21:29.262 [2024-11-20T13:40:41.219Z] =================================================================================================================== 00:21:29.262 [2024-11-20T13:40:41.219Z] Total : 39918.41 155.93 0.00 0.00 1598.67 792.88 7158.95 00:21:30.197 13:40:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:30.197 13:40:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:21:30.197 13:40:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:30.197 13:40:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:30.197 13:40:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:30.197 { 00:21:30.197 "subsystems": [ 00:21:30.197 { 00:21:30.197 "subsystem": "bdev", 00:21:30.197 "config": [ 00:21:30.197 { 00:21:30.197 "params": { 00:21:30.197 "io_mechanism": "io_uring_cmd", 00:21:30.197 "conserve_cpu": false, 00:21:30.197 "filename": "/dev/ng0n1", 00:21:30.197 "name": "xnvme_bdev" 00:21:30.197 }, 00:21:30.197 "method": "bdev_xnvme_create" 00:21:30.197 }, 00:21:30.197 { 00:21:30.197 "method": "bdev_wait_for_examine" 00:21:30.197 } 00:21:30.197 ] 00:21:30.197 } 00:21:30.197 ] 00:21:30.197 } 00:21:30.197 [2024-11-20 13:40:42.117192] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:21:30.197 [2024-11-20 13:40:42.117542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72693 ] 00:21:30.455 [2024-11-20 13:40:42.301929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.714 [2024-11-20 13:40:42.421779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.973 Running I/O for 5 seconds... 00:21:32.865 32128.00 IOPS, 125.50 MiB/s [2024-11-20T13:40:46.198Z] 31648.00 IOPS, 123.62 MiB/s [2024-11-20T13:40:47.133Z] 30314.67 IOPS, 118.42 MiB/s [2024-11-20T13:40:48.070Z] 31088.00 IOPS, 121.44 MiB/s [2024-11-20T13:40:48.070Z] 31398.40 IOPS, 122.65 MiB/s 00:21:36.113 Latency(us) 00:21:36.113 [2024-11-20T13:40:48.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.113 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:21:36.113 xnvme_bdev : 5.01 31378.79 122.57 0.00 0.00 2033.35 967.25 7843.26 00:21:36.113 [2024-11-20T13:40:48.070Z] =================================================================================================================== 00:21:36.113 [2024-11-20T13:40:48.070Z] Total : 31378.79 122.57 0.00 0.00 2033.35 967.25 7843.26 00:21:37.051 13:40:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:37.051 13:40:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:21:37.051 13:40:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:37.051 13:40:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:37.051 13:40:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:37.051 { 00:21:37.051 "subsystems": [ 00:21:37.051 { 00:21:37.051 "subsystem": "bdev", 00:21:37.051 "config": [ 00:21:37.051 { 00:21:37.051 "params": { 00:21:37.051 "io_mechanism": "io_uring_cmd", 00:21:37.051 "conserve_cpu": false, 00:21:37.051 "filename": "/dev/ng0n1", 00:21:37.051 "name": "xnvme_bdev" 00:21:37.051 }, 00:21:37.051 "method": "bdev_xnvme_create" 00:21:37.051 }, 00:21:37.051 { 00:21:37.051 "method": "bdev_wait_for_examine" 00:21:37.051 } 00:21:37.051 ] 00:21:37.051 } 00:21:37.051 ] 00:21:37.051 } 00:21:37.051 [2024-11-20 13:40:49.005046] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:21:37.051 [2024-11-20 13:40:49.005186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72774 ] 00:21:37.322 [2024-11-20 13:40:49.194000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.583 [2024-11-20 13:40:49.310534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.841 Running I/O for 5 seconds... 
00:21:39.739 69248.00 IOPS, 270.50 MiB/s [2024-11-20T13:40:53.071Z] 69152.00 IOPS, 270.12 MiB/s [2024-11-20T13:40:54.009Z] 69077.33 IOPS, 269.83 MiB/s [2024-11-20T13:40:54.949Z] 69440.00 IOPS, 271.25 MiB/s 00:21:42.992 Latency(us) 00:21:42.992 [2024-11-20T13:40:54.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.992 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:21:42.992 xnvme_bdev : 5.00 69279.18 270.62 0.00 0.00 920.94 526.39 3816.35 00:21:42.992 [2024-11-20T13:40:54.949Z] =================================================================================================================== 00:21:42.992 [2024-11-20T13:40:54.949Z] Total : 69279.18 270.62 0.00 0.00 920.94 526.39 3816.35 00:21:43.935 13:40:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:43.935 13:40:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:21:43.935 13:40:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:43.935 13:40:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:43.935 13:40:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:43.935 { 00:21:43.935 "subsystems": [ 00:21:43.935 { 00:21:43.935 "subsystem": "bdev", 00:21:43.935 "config": [ 00:21:43.935 { 00:21:43.935 "params": { 00:21:43.935 "io_mechanism": "io_uring_cmd", 00:21:43.935 "conserve_cpu": false, 00:21:43.935 "filename": "/dev/ng0n1", 00:21:43.935 "name": "xnvme_bdev" 00:21:43.935 }, 00:21:43.935 "method": "bdev_xnvme_create" 00:21:43.935 }, 00:21:43.935 { 00:21:43.935 "method": "bdev_wait_for_examine" 00:21:43.935 } 00:21:43.935 ] 00:21:43.935 } 00:21:43.935 ] 00:21:43.935 } 00:21:44.194 [2024-11-20 13:40:55.898145] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:21:44.194 [2024-11-20 13:40:55.898339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72848 ] 00:21:44.194 [2024-11-20 13:40:56.108650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.453 [2024-11-20 13:40:56.225949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.711 Running I/O for 5 seconds... 
00:21:47.023 52361.00 IOPS, 204.54 MiB/s [2024-11-20T13:40:59.917Z] 48233.00 IOPS, 188.41 MiB/s [2024-11-20T13:41:00.852Z] 46699.67 IOPS, 182.42 MiB/s [2024-11-20T13:41:01.788Z] 45839.75 IOPS, 179.06 MiB/s [2024-11-20T13:41:01.788Z] 45415.40 IOPS, 177.40 MiB/s 00:21:49.831 Latency(us) 00:21:49.831 [2024-11-20T13:41:01.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.831 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:21:49.831 xnvme_bdev : 5.00 45405.20 177.36 0.00 0.00 1406.14 57.99 17476.27 00:21:49.831 [2024-11-20T13:41:01.788Z] =================================================================================================================== 00:21:49.831 [2024-11-20T13:41:01.788Z] Total : 45405.20 177.36 0.00 0.00 1406.14 57.99 17476.27 00:21:51.208 00:21:51.208 real 0m27.699s 00:21:51.208 user 0m14.032s 00:21:51.208 sys 0m13.241s 00:21:51.208 ************************************ 00:21:51.208 END TEST xnvme_bdevperf 00:21:51.208 ************************************ 00:21:51.208 13:41:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:51.208 13:41:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:51.208 13:41:02 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:51.208 13:41:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:51.208 13:41:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.208 13:41:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:51.208 ************************************ 00:21:51.208 START TEST xnvme_fio_plugin 00:21:51.208 ************************************ 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:51.208 { 00:21:51.208 "subsystems": [ 00:21:51.208 { 00:21:51.208 "subsystem": "bdev", 00:21:51.208 "config": [ 00:21:51.208 { 00:21:51.208 "params": { 00:21:51.208 "io_mechanism": "io_uring_cmd", 00:21:51.208 "conserve_cpu": false, 00:21:51.208 "filename": "/dev/ng0n1", 00:21:51.208 "name": "xnvme_bdev" 00:21:51.208 }, 00:21:51.208 "method": "bdev_xnvme_create" 00:21:51.208 }, 00:21:51.208 { 00:21:51.208 "method": "bdev_wait_for_examine" 00:21:51.208 } 00:21:51.208 ] 00:21:51.208 } 00:21:51.208 ] 00:21:51.208 } 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:51.208 13:41:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:51.208 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:51.208 fio-3.35 00:21:51.208 Starting 1 thread 00:21:57.778 00:21:57.778 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72972: Wed Nov 20 13:41:08 2024 00:21:57.778 read: IOPS=31.8k, BW=124MiB/s (130MB/s)(622MiB/5001msec) 00:21:57.778 slat (usec): min=2, max=113, avg= 5.73, stdev= 2.20 00:21:57.778 clat (usec): min=940, max=3824, avg=1783.44, stdev=324.67 00:21:57.778 lat (usec): min=943, max=3862, avg=1789.17, stdev=325.75 00:21:57.778 clat percentiles (usec): 00:21:57.778 | 1.00th=[ 1074], 5.00th=[ 1237], 10.00th=[ 1369], 20.00th=[ 1516], 00:21:57.778 | 30.00th=[ 1614], 40.00th=[ 1696], 50.00th=[ 1778], 60.00th=[ 1860], 00:21:57.778 | 70.00th=[ 1942], 80.00th=[ 2040], 90.00th=[ 2180], 95.00th=[ 2311], 00:21:57.778 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 3130], 99.95th=[ 3326], 00:21:57.778 | 99.99th=[ 3654] 00:21:57.778 bw ( KiB/s): min=111104, max=151040, per=100.00%, avg=128255.56, stdev=12249.21, samples=9 00:21:57.778 iops : min=27776, max=37760, avg=32063.89, stdev=3062.30, samples=9 00:21:57.778 lat (usec) : 1000=0.12% 00:21:57.778 lat (msec) : 2=75.59%, 4=24.28% 00:21:57.778 cpu : usr=34.56%, sys=64.28%, ctx=12, majf=0, minf=762 00:21:57.778 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:57.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.778 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:21:57.778 issued rwts: total=159168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.778 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:57.778 00:21:57.778 Run status group 0 (all jobs): 00:21:57.778 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=622MiB (652MB), run=5001-5001msec 00:21:58.346 ----------------------------------------------------- 00:21:58.346 Suppressions used: 00:21:58.346 count bytes template 00:21:58.346 1 11 /usr/src/fio/parse.c 00:21:58.346 1 8 libtcmalloc_minimal.so 00:21:58.346 1 904 libcrypto.so 00:21:58.346 ----------------------------------------------------- 00:21:58.346 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:58.346 { 00:21:58.346 "subsystems": [ 00:21:58.346 { 00:21:58.346 "subsystem": "bdev", 00:21:58.346 "config": [ 00:21:58.346 { 00:21:58.346 "params": { 00:21:58.346 "io_mechanism": "io_uring_cmd", 00:21:58.346 "conserve_cpu": false, 00:21:58.346 "filename": "/dev/ng0n1", 00:21:58.346 "name": "xnvme_bdev" 00:21:58.346 }, 00:21:58.346 "method": "bdev_xnvme_create" 00:21:58.346 }, 00:21:58.346 { 00:21:58.346 "method": "bdev_wait_for_examine" 00:21:58.346 } 00:21:58.346 ] 00:21:58.346 } 00:21:58.346 ] 00:21:58.346 } 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # 
[[ -n /usr/lib64/libasan.so.8 ]] 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:58.346 13:41:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:58.605 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:58.605 fio-3.35 00:21:58.605 Starting 1 thread 00:22:05.249 00:22:05.249 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73063: Wed Nov 20 13:41:16 2024 00:22:05.249 write: IOPS=30.9k, BW=121MiB/s (127MB/s)(604MiB/5001msec); 0 zone resets 00:22:05.249 slat (usec): min=2, max=260, avg= 6.16, stdev= 2.83 00:22:05.249 clat (usec): min=94, max=5422, avg=1836.23, stdev=436.35 00:22:05.249 lat (usec): min=99, max=5426, avg=1842.40, stdev=437.39 00:22:05.249 clat percentiles (usec): 00:22:05.249 | 1.00th=[ 865], 5.00th=[ 1237], 10.00th=[ 1369], 20.00th=[ 1516], 00:22:05.249 | 30.00th=[ 1614], 40.00th=[ 1713], 50.00th=[ 1795], 60.00th=[ 1893], 00:22:05.249 | 70.00th=[ 1991], 80.00th=[ 2114], 90.00th=[ 2311], 95.00th=[ 2540], 00:22:05.249 | 99.00th=[ 3294], 99.50th=[ 3589], 99.90th=[ 4228], 99.95th=[ 4490], 00:22:05.249 | 99.99th=[ 5014] 00:22:05.249 bw ( KiB/s): min=106496, max=141312, per=100.00%, avg=124896.89, stdev=11239.95, samples=9 00:22:05.249 iops : min=26624, max=35328, avg=31224.22, stdev=2809.99, samples=9 00:22:05.249 lat (usec) : 100=0.01%, 250=0.07%, 500=0.27%, 750=0.41%, 1000=0.55% 00:22:05.249 lat (msec) : 2=69.31%, 4=29.19%, 10=0.21% 00:22:05.249 cpu : usr=36.76%, sys=61.96%, ctx=18, majf=0, minf=763 00:22:05.249 IO depths : 1=1.4%, 2=2.9%, 4=5.8%, 8=11.6%, 16=23.6%, 32=52.8%, >=64=1.9% 00:22:05.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.249 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:22:05.249 issued rwts: total=0,154671,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.249 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:05.249 00:22:05.249 Run status group 0 (all jobs): 00:22:05.249 WRITE: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=604MiB (634MB), run=5001-5001msec 00:22:05.817 ----------------------------------------------------- 00:22:05.818 Suppressions used: 00:22:05.818 count bytes template 00:22:05.818 1 11 /usr/src/fio/parse.c 00:22:05.818 1 8 libtcmalloc_minimal.so 00:22:05.818 1 904 libcrypto.so 00:22:05.818 ----------------------------------------------------- 00:22:05.818 00:22:05.818 00:22:05.818 real 0m14.892s 00:22:05.818 user 0m7.387s 00:22:05.818 sys 0m7.096s 00:22:05.818 13:41:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:05.818 ************************************ 00:22:05.818 END TEST xnvme_fio_plugin 00:22:05.818 13:41:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:05.818 ************************************ 00:22:05.818 13:41:17 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:22:05.818 13:41:17 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:22:05.818 13:41:17 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:22:05.818 13:41:17 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:22:05.818 13:41:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:05.818 13:41:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:05.818 13:41:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:06.076 ************************************ 00:22:06.076 START TEST xnvme_rpc 00:22:06.076 ************************************ 00:22:06.076 13:41:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:22:06.076 13:41:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:22:06.076 13:41:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:22:06.076 13:41:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:22:06.076 13:41:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:22:06.076 13:41:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73159 00:22:06.076 13:41:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:06.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.076 13:41:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73159 00:22:06.076 13:41:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73159 ']' 00:22:06.076 13:41:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.076 13:41:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.076 13:41:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.076 13:41:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.076 13:41:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:06.076 [2024-11-20 13:41:17.892091] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:22:06.076 [2024-11-20 13:41:17.892238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73159 ] 00:22:06.335 [2024-11-20 13:41:18.074852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.335 [2024-11-20 13:41:18.196983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:07.273 xnvme_bdev 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.273 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73159 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73159 ']' 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73159 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73159 00:22:07.531 killing process with pid 73159 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73159' 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73159 00:22:07.531 13:41:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73159 00:22:10.075 00:22:10.075 real 0m4.024s 00:22:10.075 user 0m4.085s 00:22:10.075 sys 0m0.575s 00:22:10.075 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:10.075 ************************************ 00:22:10.075 END TEST xnvme_rpc 00:22:10.075 ************************************ 00:22:10.075 13:41:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:10.075 13:41:21 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:22:10.075 13:41:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:10.075 13:41:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:10.075 13:41:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:10.075 ************************************ 00:22:10.075 START TEST xnvme_bdevperf 00:22:10.075 ************************************ 00:22:10.075 13:41:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:22:10.075 13:41:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:22:10.075 13:41:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:22:10.075 13:41:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:10.075 13:41:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:22:10.075 13:41:21 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:22:10.075 13:41:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:10.075 13:41:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:10.075 { 00:22:10.075 "subsystems": [ 00:22:10.075 { 00:22:10.075 "subsystem": "bdev", 00:22:10.075 "config": [ 00:22:10.075 { 00:22:10.075 "params": { 00:22:10.075 "io_mechanism": "io_uring_cmd", 00:22:10.075 "conserve_cpu": true, 00:22:10.075 "filename": "/dev/ng0n1", 00:22:10.075 "name": "xnvme_bdev" 00:22:10.075 }, 00:22:10.075 "method": "bdev_xnvme_create" 00:22:10.075 }, 00:22:10.075 { 00:22:10.075 "method": "bdev_wait_for_examine" 00:22:10.075 } 00:22:10.075 ] 00:22:10.075 } 00:22:10.075 ] 00:22:10.075 } 00:22:10.075 [2024-11-20 13:41:21.975250] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:22:10.075 [2024-11-20 13:41:21.975389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73241 ] 00:22:10.334 [2024-11-20 13:41:22.155325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.334 [2024-11-20 13:41:22.274291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.902 Running I/O for 5 seconds... 00:22:12.770 40832.00 IOPS, 159.50 MiB/s [2024-11-20T13:41:25.663Z] 41568.00 IOPS, 162.38 MiB/s [2024-11-20T13:41:27.040Z] 41386.67 IOPS, 161.67 MiB/s [2024-11-20T13:41:27.607Z] 39967.75 IOPS, 156.12 MiB/s 00:22:15.650 Latency(us) 00:22:15.650 [2024-11-20T13:41:27.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.650 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:22:15.650 xnvme_bdev : 5.00 39892.46 155.83 0.00 0.00 1599.68 740.24 6132.49 00:22:15.650 [2024-11-20T13:41:27.607Z] =================================================================================================================== 00:22:15.650 [2024-11-20T13:41:27.607Z] Total : 39892.46 155.83 0.00 0.00 1599.68 740.24 6132.49 00:22:17.028 13:41:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:17.028 13:41:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:22:17.028 13:41:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:17.028 13:41:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:17.028 13:41:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:17.028 { 00:22:17.028 "subsystems": [ 00:22:17.028 { 00:22:17.028 "subsystem": "bdev", 00:22:17.028 "config": [ 00:22:17.028 { 00:22:17.028 "params": { 00:22:17.028 "io_mechanism": "io_uring_cmd", 00:22:17.028 "conserve_cpu": true, 00:22:17.028 "filename": "/dev/ng0n1", 00:22:17.028 "name": "xnvme_bdev" 00:22:17.028 }, 00:22:17.028 "method": "bdev_xnvme_create" 00:22:17.028 }, 00:22:17.028 { 00:22:17.028 "method": "bdev_wait_for_examine" 00:22:17.028 } 00:22:17.028 ] 00:22:17.028 } 00:22:17.028 ] 00:22:17.028 } 00:22:17.028 [2024-11-20 13:41:28.872353] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:22:17.028 [2024-11-20 13:41:28.872493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73317 ] 00:22:17.287 [2024-11-20 13:41:29.047442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.287 [2024-11-20 13:41:29.179056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.885 Running I/O for 5 seconds... 00:22:19.759 30400.00 IOPS, 118.75 MiB/s [2024-11-20T13:41:32.652Z] 31296.00 IOPS, 122.25 MiB/s [2024-11-20T13:41:33.589Z] 32128.00 IOPS, 125.50 MiB/s [2024-11-20T13:41:34.598Z] 31664.00 IOPS, 123.69 MiB/s [2024-11-20T13:41:34.598Z] 31488.00 IOPS, 123.00 MiB/s 00:22:22.641 Latency(us) 00:22:22.641 [2024-11-20T13:41:34.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.641 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:22:22.641 xnvme_bdev : 5.01 31466.39 122.92 0.00 0.00 2027.78 861.97 6685.20 00:22:22.641 [2024-11-20T13:41:34.598Z] =================================================================================================================== 00:22:22.641 [2024-11-20T13:41:34.598Z] Total : 31466.39 122.92 0.00 0.00 2027.78 861.97 6685.20 00:22:24.021 13:41:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:24.021 13:41:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:22:24.021 13:41:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:24.021 13:41:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:24.021 13:41:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:24.021 { 00:22:24.021 "subsystems": [ 00:22:24.021 { 00:22:24.021 "subsystem": "bdev", 00:22:24.021 "config": [ 00:22:24.021 { 00:22:24.021 "params": { 00:22:24.021 "io_mechanism": "io_uring_cmd", 00:22:24.021 "conserve_cpu": true, 00:22:24.021 "filename": "/dev/ng0n1", 00:22:24.021 "name": "xnvme_bdev" 00:22:24.021 }, 00:22:24.021 "method": "bdev_xnvme_create" 00:22:24.021 }, 00:22:24.021 { 00:22:24.021 "method": "bdev_wait_for_examine" 00:22:24.021 } 00:22:24.021 ] 00:22:24.021 } 00:22:24.021 ] 00:22:24.021 } 00:22:24.021 [2024-11-20 13:41:35.838579] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:22:24.021 [2024-11-20 13:41:35.838791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73397 ] 00:22:24.280 [2024-11-20 13:41:36.034955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.280 [2024-11-20 13:41:36.166089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.850 Running I/O for 5 seconds... 
00:22:26.724 69696.00 IOPS, 272.25 MiB/s [2024-11-20T13:41:39.617Z] 68384.00 IOPS, 267.12 MiB/s [2024-11-20T13:41:40.553Z] 74261.33 IOPS, 290.08 MiB/s [2024-11-20T13:41:41.931Z] 74592.00 IOPS, 291.38 MiB/s 00:22:29.974 Latency(us) 00:22:29.974 [2024-11-20T13:41:41.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.974 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:22:29.974 xnvme_bdev : 5.00 72884.99 284.71 0.00 0.00 875.17 399.73 2855.69 00:22:29.974 [2024-11-20T13:41:41.931Z] =================================================================================================================== 00:22:29.974 [2024-11-20T13:41:41.931Z] Total : 72884.99 284.71 0.00 0.00 875.17 399.73 2855.69 00:22:30.961 13:41:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:30.961 13:41:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:22:30.961 13:41:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:30.961 13:41:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:30.961 13:41:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:30.961 { 00:22:30.961 "subsystems": [ 00:22:30.961 { 00:22:30.961 "subsystem": "bdev", 00:22:30.961 "config": [ 00:22:30.961 { 00:22:30.961 "params": { 00:22:30.961 "io_mechanism": "io_uring_cmd", 00:22:30.961 "conserve_cpu": true, 00:22:30.961 "filename": "/dev/ng0n1", 00:22:30.961 "name": "xnvme_bdev" 00:22:30.961 }, 00:22:30.961 "method": "bdev_xnvme_create" 00:22:30.961 }, 00:22:30.961 { 00:22:30.961 "method": "bdev_wait_for_examine" 00:22:30.961 } 00:22:30.961 ] 00:22:30.961 } 00:22:30.961 ] 00:22:30.961 } 00:22:30.961 [2024-11-20 13:41:42.788627] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:22:30.961 [2024-11-20 13:41:42.788774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73474 ] 00:22:31.220 [2024-11-20 13:41:42.973186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.220 [2024-11-20 13:41:43.097937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.785 Running I/O for 5 seconds... 
00:22:33.649 57834.00 IOPS, 225.91 MiB/s [2024-11-20T13:41:46.562Z] 58278.50 IOPS, 227.65 MiB/s [2024-11-20T13:41:47.494Z] 57334.67 IOPS, 223.96 MiB/s [2024-11-20T13:41:48.869Z] 57127.00 IOPS, 223.15 MiB/s [2024-11-20T13:41:48.869Z] 55261.40 IOPS, 215.86 MiB/s 00:22:36.912 Latency(us) 00:22:36.912 [2024-11-20T13:41:48.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.912 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:22:36.912 xnvme_bdev : 5.01 55218.26 215.70 0.00 0.00 1154.96 85.13 7685.35 00:22:36.912 [2024-11-20T13:41:48.869Z] =================================================================================================================== 00:22:36.912 [2024-11-20T13:41:48.869Z] Total : 55218.26 215.70 0.00 0.00 1154.96 85.13 7685.35 00:22:37.850 00:22:37.850 real 0m27.755s 00:22:37.850 user 0m17.760s 00:22:37.850 sys 0m8.714s 00:22:37.850 13:41:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.850 13:41:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:37.850 ************************************ 00:22:37.850 END TEST xnvme_bdevperf 00:22:37.850 ************************************ 00:22:37.850 13:41:49 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:22:37.850 13:41:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:37.850 13:41:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.850 13:41:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:37.850 ************************************ 00:22:37.850 START TEST xnvme_fio_plugin 00:22:37.850 ************************************ 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:37.850 { 00:22:37.850 "subsystems": [ 00:22:37.850 { 00:22:37.850 "subsystem": "bdev", 00:22:37.850 "config": [ 00:22:37.850 { 00:22:37.850 "params": { 00:22:37.850 "io_mechanism": "io_uring_cmd", 00:22:37.850 "conserve_cpu": true, 00:22:37.850 "filename": "/dev/ng0n1", 00:22:37.850 "name": "xnvme_bdev" 00:22:37.850 }, 00:22:37.850 "method": "bdev_xnvme_create" 00:22:37.850 }, 00:22:37.850 { 00:22:37.850 "method": "bdev_wait_for_examine" 00:22:37.850 } 00:22:37.850 ] 00:22:37.850 } 00:22:37.850 ] 00:22:37.850 } 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:37.850 13:41:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:38.110 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:38.110 fio-3.35 00:22:38.110 Starting 1 thread 00:22:44.751 00:22:44.751 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73597: Wed Nov 20 13:41:55 2024 00:22:44.751 read: IOPS=35.6k, BW=139MiB/s (146MB/s)(696MiB/5001msec) 00:22:44.751 slat (usec): min=2, max=110, avg= 5.00, stdev= 2.00 00:22:44.751 clat (usec): min=834, max=4087, avg=1598.52, stdev=301.90 00:22:44.751 lat (usec): min=838, max=4108, avg=1603.52, stdev=302.98 00:22:44.751 clat percentiles (usec): 00:22:44.751 | 1.00th=[ 979], 5.00th=[ 1172], 10.00th=[ 1287], 20.00th=[ 1369], 00:22:44.751 | 30.00th=[ 1434], 40.00th=[ 1483], 50.00th=[ 1549], 60.00th=[ 1614], 00:22:44.751 | 70.00th=[ 1713], 80.00th=[ 1827], 90.00th=[ 2008], 95.00th=[ 2180], 00:22:44.751 | 99.00th=[ 2442], 99.50th=[ 2540], 99.90th=[ 2835], 99.95th=[ 2999], 00:22:44.751 | 99.99th=[ 3949] 00:22:44.751 bw ( KiB/s): min=128256, max=150829, per=98.58%, avg=140382.78, stdev=7673.57, samples=9 00:22:44.751 iops : min=32064, max=37707, avg=35095.56, stdev=1918.33, samples=9 00:22:44.751 lat (usec) : 1000=1.31% 00:22:44.751 lat (msec) : 2=88.24%, 4=10.44%, 10=0.01% 00:22:44.751 cpu : usr=51.18%, sys=46.28%, ctx=35, majf=0, minf=762 00:22:44.751 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:44.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.751 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:22:44.751 issued rwts: total=178048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.751 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.751 00:22:44.751 Run status group 0 (all jobs): 00:22:44.751 READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=696MiB (729MB), run=5001-5001msec 00:22:45.318 ----------------------------------------------------- 00:22:45.318 Suppressions used: 00:22:45.318 count bytes template 00:22:45.318 1 11 /usr/src/fio/parse.c 00:22:45.318 1 8 libtcmalloc_minimal.so 00:22:45.318 1 904 libcrypto.so 00:22:45.318 ----------------------------------------------------- 00:22:45.318 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:45.318 13:41:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:45.318 { 00:22:45.318 "subsystems": [ 00:22:45.318 { 00:22:45.318 "subsystem": "bdev", 00:22:45.318 "config": [ 00:22:45.318 { 00:22:45.318 "params": { 00:22:45.318 "io_mechanism": "io_uring_cmd", 00:22:45.318 "conserve_cpu": true, 00:22:45.318 "filename": "/dev/ng0n1", 00:22:45.318 "name": "xnvme_bdev" 00:22:45.318 }, 00:22:45.318 "method": "bdev_xnvme_create" 00:22:45.318 }, 00:22:45.318 { 00:22:45.318 "method": "bdev_wait_for_examine" 00:22:45.318 } 00:22:45.318 ] 00:22:45.318 } 00:22:45.318 ] 00:22:45.318 } 00:22:45.658 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:45.658 fio-3.35 00:22:45.658 Starting 1 thread 00:22:52.228 00:22:52.228 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73693: Wed Nov 20 13:42:03 2024 00:22:52.228 write: IOPS=34.7k, BW=136MiB/s (142MB/s)(679MiB/5002msec); 0 zone resets 00:22:52.228 slat (nsec): min=2541, max=70085, avg=5238.91, stdev=2323.94 00:22:52.228 clat (usec): min=844, max=5350, avg=1635.12, stdev=358.16 00:22:52.228 lat (usec): min=848, max=5356, avg=1640.35, stdev=359.47 00:22:52.228 clat percentiles (usec): 00:22:52.228 | 1.00th=[ 1012], 5.00th=[ 1106], 10.00th=[ 1188], 20.00th=[ 1319], 00:22:52.228 | 30.00th=[ 1434], 40.00th=[ 1516], 50.00th=[ 1614], 60.00th=[ 1696], 00:22:52.228 | 70.00th=[ 1795], 80.00th=[ 1926], 90.00th=[ 2114], 95.00th=[ 2278], 00:22:52.228 | 99.00th=[ 2606], 99.50th=[ 2769], 99.90th=[ 3097], 99.95th=[ 3195], 00:22:52.228 | 99.99th=[ 3425] 00:22:52.228 bw ( KiB/s): min=124928, max=182272, per=100.00%, avg=139888.89, stdev=16948.19, samples=9 00:22:52.228 iops : min=31232, max=45568, avg=34972.22, stdev=4237.05, samples=9 00:22:52.228 lat (usec) : 1000=0.83% 00:22:52.228 lat (msec) : 2=83.88%, 4=15.28%, 10=0.01% 00:22:52.228 cpu : usr=53.15%, sys=44.33%, ctx=10, majf=0, minf=763 00:22:52.228 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:52.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.228 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:22:52.228 issued rwts: total=0,173759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.228 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.228 00:22:52.228 Run status group 0 (all jobs): 00:22:52.228 WRITE: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=679MiB (712MB), run=5002-5002msec 00:22:52.797 ----------------------------------------------------- 00:22:52.797 Suppressions used: 00:22:52.797 count bytes template 00:22:52.797 1 11 /usr/src/fio/parse.c 00:22:52.797 1 8 libtcmalloc_minimal.so 00:22:52.797 1 904 libcrypto.so 00:22:52.797 ----------------------------------------------------- 00:22:52.797 00:22:52.797 00:22:52.797 real 0m14.809s 00:22:52.797 user 0m9.034s 00:22:52.797 sys 0m5.261s 00:22:52.797 13:42:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:52.797 ************************************ 00:22:52.797 END TEST xnvme_fio_plugin 00:22:52.797 ************************************ 00:22:52.797 13:42:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:52.797 13:42:04 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73159 00:22:52.797 13:42:04 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73159 ']' 00:22:52.797 13:42:04 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73159 00:22:52.797 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73159) - No such process 00:22:52.797 Process with pid 73159 is not found 00:22:52.797 13:42:04 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73159 is not found' 00:22:52.797 00:22:52.797 real 3m54.130s 00:22:52.797 user 2m8.945s 00:22:52.797 sys 1m28.512s 00:22:52.797 13:42:04 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:52.797 13:42:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:52.797 ************************************ 00:22:52.797 END TEST nvme_xnvme 00:22:52.797 ************************************ 00:22:52.797 13:42:04 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:52.797 13:42:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:52.797 13:42:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:52.797 13:42:04 -- common/autotest_common.sh@10 -- # set +x 00:22:52.797 ************************************ 00:22:52.797 START TEST blockdev_xnvme 00:22:52.797 ************************************ 00:22:52.797 13:42:04 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:53.057 * Looking for test storage... 00:22:53.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:53.057 13:42:04 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:53.057 13:42:04 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:22:53.057 13:42:04 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:53.057 13:42:04 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:53.057 13:42:04 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:22:53.057 13:42:04 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:53.057 13:42:04 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:53.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.057 --rc genhtml_branch_coverage=1 00:22:53.057 --rc genhtml_function_coverage=1 00:22:53.057 --rc genhtml_legend=1 00:22:53.057 --rc geninfo_all_blocks=1 00:22:53.057 --rc geninfo_unexecuted_blocks=1 00:22:53.057 00:22:53.057 ' 00:22:53.057 13:42:04 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:53.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.057 --rc genhtml_branch_coverage=1 00:22:53.057 --rc genhtml_function_coverage=1 00:22:53.057 --rc genhtml_legend=1 00:22:53.057 --rc geninfo_all_blocks=1 00:22:53.057 --rc geninfo_unexecuted_blocks=1 00:22:53.057 00:22:53.057 ' 00:22:53.057 13:42:04 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:53.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.057 --rc genhtml_branch_coverage=1 00:22:53.057 --rc genhtml_function_coverage=1 00:22:53.057 --rc genhtml_legend=1 00:22:53.057 --rc geninfo_all_blocks=1 00:22:53.057 --rc geninfo_unexecuted_blocks=1 00:22:53.057 00:22:53.057 ' 00:22:53.057 13:42:04 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:53.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.057 --rc genhtml_branch_coverage=1 00:22:53.057 --rc genhtml_function_coverage=1 00:22:53.057 --rc genhtml_legend=1 00:22:53.057 --rc geninfo_all_blocks=1 00:22:53.057 --rc geninfo_unexecuted_blocks=1 00:22:53.057 00:22:53.057 ' 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:22:53.057 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:22:53.058 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:22:53.058 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:22:53.058 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:22:53.058 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:22:53.058 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:22:53.058 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:22:53.058 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73827 00:22:53.058 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:53.058 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73827 00:22:53.058 13:42:04 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:53.058 13:42:04 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73827 ']' 00:22:53.058 13:42:04 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.058 13:42:04 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.058 13:42:04 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.058 13:42:04 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.058 13:42:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:53.058 [2024-11-20 13:42:05.001028] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:22:53.058 [2024-11-20 13:42:05.001166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73827 ] 00:22:53.318 [2024-11-20 13:42:05.189960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.577 [2024-11-20 13:42:05.342929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.513 13:42:06 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.513 13:42:06 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:22:54.513 13:42:06 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:22:54.513 13:42:06 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:22:54.513 13:42:06 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:22:54.513 13:42:06 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:22:54.513 13:42:06 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:55.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:55.650 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:55.650 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:55.650 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:22:55.910 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:22:55.910 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.910 13:42:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:55.911 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:22:55.911 nvme0n1 00:22:55.911 nvme0n2 00:22:55.911 nvme0n3 00:22:55.911 nvme1n1 00:22:55.911 nvme2n1 00:22:55.911 nvme3n1 00:22:55.911 13:42:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.911 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:22:55.911 13:42:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.911 13:42:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:55.911 13:42:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.911 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:22:55.911 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:22:55.911 13:42:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.911 13:42:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:55.911 13:42:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.911 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:22:55.911 13:42:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.911 13:42:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:55.911 13:42:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.911 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:22:55.911 13:42:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.911 13:42:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:55.911 13:42:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.911 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:22:55.911 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:22:55.911 13:42:07 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.911 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:22:55.911 13:42:07 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:22:56.171 13:42:07 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.171 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:22:56.171 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:22:56.172 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3318f6a1-e3b2-4ef0-a383-e5ef5b5950dc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3318f6a1-e3b2-4ef0-a383-e5ef5b5950dc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "cad7afbb-d56a-440a-bd90-6c0da0cb465c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cad7afbb-d56a-440a-bd90-6c0da0cb465c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "1f507975-dee4-4031-853b-ee318c4045cf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1f507975-dee4-4031-853b-ee318c4045cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "848fb9e7-fb99-4296-9460-7b9e1949f001"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "848fb9e7-fb99-4296-9460-7b9e1949f001",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "6d78ac57-dd6d-40bc-82a7-7c98e7accda5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6d78ac57-dd6d-40bc-82a7-7c98e7accda5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "8eca6275-a8cd-4d4e-a655-9a6a32ee3170"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8eca6275-a8cd-4d4e-a655-9a6a32ee3170",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:22:56.172 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:22:56.172 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:22:56.172 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:22:56.172 13:42:07 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73827 00:22:56.172 13:42:07 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73827 ']' 00:22:56.172 13:42:07 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73827 00:22:56.172 13:42:07 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:22:56.172 13:42:07 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.172 13:42:07 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73827 00:22:56.172 13:42:07 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:56.172 13:42:07 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:56.172 killing process with pid 73827 00:22:56.172 13:42:07 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73827' 00:22:56.172 13:42:07 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73827 00:22:56.172 
13:42:07 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73827 00:22:58.706 13:42:10 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:58.706 13:42:10 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:22:58.706 13:42:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:58.706 13:42:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:58.706 13:42:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:58.706 ************************************ 00:22:58.706 START TEST bdev_hello_world 00:22:58.706 ************************************ 00:22:58.706 13:42:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:22:58.706 [2024-11-20 13:42:10.516892] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:22:58.706 [2024-11-20 13:42:10.517038] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74128 ] 00:22:58.965 [2024-11-20 13:42:10.702390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.965 [2024-11-20 13:42:10.821734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.532 [2024-11-20 13:42:11.265191] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:59.532 [2024-11-20 13:42:11.265257] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:22:59.532 [2024-11-20 13:42:11.265284] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:59.532 [2024-11-20 13:42:11.267535] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:59.532 [2024-11-20 13:42:11.268099] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:59.532 [2024-11-20 13:42:11.268133] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:59.532 [2024-11-20 13:42:11.268379] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
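The bdev that hello_bdev writes and reads back here is one of the six xNVMe bdevs registered through the bdev_xnvme_create RPCs printf'd earlier in the trace. Reproducing that registration by hand against a running target reduces to one loop over the same namespaces (a sketch, run from the SPDK repo root with the same privileges as the target; io_uring is the io_mechanism this test selects and -c is the extra flag the harness appends to every create call):

  # The same six bdev_xnvme_create calls the harness generated above,
  # driven through rpc.py against a running spdk_tgt:
  for dev in /dev/nvme0n{1,2,3} /dev/nvme{1,2,3}n1; do
      scripts/rpc.py bdev_xnvme_create "$dev" "${dev##*/}" io_uring -c
  done

After that, the hello-world pass itself is just the example binary pointed at the dumped config, exactly as run_test records above: build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1.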
00:22:59.532 00:22:59.532 [2024-11-20 13:42:11.268410] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:23:00.909 00:23:00.909 real 0m2.009s 00:23:00.909 user 0m1.631s 00:23:00.909 sys 0m0.260s 00:23:00.909 13:42:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.909 ************************************ 00:23:00.909 13:42:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:23:00.909 END TEST bdev_hello_world 00:23:00.909 ************************************ 00:23:00.909 13:42:12 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:23:00.909 13:42:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:00.909 13:42:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.909 13:42:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:00.909 ************************************ 00:23:00.909 START TEST bdev_bounds 00:23:00.909 ************************************ 00:23:00.909 13:42:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:23:00.909 Process bdevio pid: 74166 00:23:00.909 13:42:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74166 00:23:00.909 13:42:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:23:00.909 13:42:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74166' 00:23:00.909 13:42:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74166 00:23:00.909 13:42:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74166 ']' 00:23:00.909 13:42:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:00.909 13:42:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.909 13:42:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.909 13:42:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.909 13:42:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.909 13:42:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:00.909 [2024-11-20 13:42:12.614324] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
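bdev_bounds drives the CUnit suites below through two cooperating processes: bdevio is launched with -w so it comes up idle and waits to be driven over /var/tmp/spdk.sock (hence the waitforlisten above), and tests.py then connects to that socket and fires perform_tests. Condensed to its two moving parts (a sketch; paths as in this run):

  # Start the bdevio server idle (-w) against the shared bdev config ...
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &
  # ... then kick off every registered suite over the RPC socket.
  test/bdev/bdevio/tests.py perform_tests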
00:23:00.909 [2024-11-20 13:42:12.614467] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74166 ] 00:23:00.909 [2024-11-20 13:42:12.800471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:01.169 [2024-11-20 13:42:12.942310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.169 [2024-11-20 13:42:12.942469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.169 [2024-11-20 13:42:12.942493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.737 13:42:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.737 13:42:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:23:01.737 13:42:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:23:01.737 I/O targets: 00:23:01.737 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:01.737 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:01.737 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:01.737 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:23:01.737 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:23:01.737 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:23:01.737 00:23:01.737 00:23:01.737 CUnit - A unit testing framework for C - Version 2.1-3 00:23:01.737 http://cunit.sourceforge.net/ 00:23:01.737 00:23:01.737 00:23:01.737 Suite: bdevio tests on: nvme3n1 00:23:01.737 Test: blockdev write read block ...passed 00:23:01.737 Test: blockdev write zeroes read block ...passed 00:23:01.737 Test: blockdev write zeroes read no split ...passed 00:23:01.737 Test: blockdev write zeroes read split ...passed 00:23:01.737 Test: blockdev write zeroes read split partial ...passed 00:23:01.737 Test: blockdev reset ...passed 00:23:01.737 Test: blockdev write read 8 blocks ...passed 00:23:01.737 Test: blockdev write read size > 128k ...passed 00:23:01.737 Test: blockdev write read invalid size ...passed 00:23:01.737 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:01.737 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:01.737 Test: blockdev write read max offset ...passed 00:23:01.737 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:01.737 Test: blockdev writev readv 8 blocks ...passed 00:23:01.737 Test: blockdev writev readv 30 x 1block ...passed 00:23:01.737 Test: blockdev writev readv block ...passed 00:23:01.737 Test: blockdev writev readv size > 128k ...passed 00:23:01.737 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:01.737 Test: blockdev comparev and writev ...passed 00:23:01.737 Test: blockdev nvme passthru rw ...passed 00:23:01.737 Test: blockdev nvme passthru vendor specific ...passed 00:23:01.737 Test: blockdev nvme admin passthru ...passed 00:23:01.737 Test: blockdev copy ...passed 00:23:01.737 Suite: bdevio tests on: nvme2n1 00:23:01.737 Test: blockdev write read block ...passed 00:23:01.737 Test: blockdev write zeroes read block ...passed 00:23:01.737 Test: blockdev write zeroes read no split ...passed 00:23:01.995 Test: blockdev write zeroes read split ...passed 00:23:01.995 Test: blockdev write zeroes read split partial ...passed 00:23:01.995 Test: blockdev reset ...passed 
00:23:01.995 Test: blockdev write read 8 blocks ...passed 00:23:01.995 Test: blockdev write read size > 128k ...passed 00:23:01.995 Test: blockdev write read invalid size ...passed 00:23:01.995 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:01.995 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:01.995 Test: blockdev write read max offset ...passed 00:23:01.995 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:01.995 Test: blockdev writev readv 8 blocks ...passed 00:23:01.995 Test: blockdev writev readv 30 x 1block ...passed 00:23:01.995 Test: blockdev writev readv block ...passed 00:23:01.995 Test: blockdev writev readv size > 128k ...passed 00:23:01.995 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:01.995 Test: blockdev comparev and writev ...passed 00:23:01.995 Test: blockdev nvme passthru rw ...passed 00:23:01.995 Test: blockdev nvme passthru vendor specific ...passed 00:23:01.995 Test: blockdev nvme admin passthru ...passed 00:23:01.995 Test: blockdev copy ...passed 00:23:01.995 Suite: bdevio tests on: nvme1n1 00:23:01.995 Test: blockdev write read block ...passed 00:23:01.995 Test: blockdev write zeroes read block ...passed 00:23:01.995 Test: blockdev write zeroes read no split ...passed 00:23:01.995 Test: blockdev write zeroes read split ...passed 00:23:01.995 Test: blockdev write zeroes read split partial ...passed 00:23:01.995 Test: blockdev reset ...passed 00:23:01.995 Test: blockdev write read 8 blocks ...passed 00:23:01.995 Test: blockdev write read size > 128k ...passed 00:23:01.995 Test: blockdev write read invalid size ...passed 00:23:01.995 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:01.995 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:01.995 Test: blockdev write read max offset ...passed 00:23:01.995 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:01.995 Test: blockdev writev readv 8 blocks ...passed 00:23:01.995 Test: blockdev writev readv 30 x 1block ...passed 00:23:01.995 Test: blockdev writev readv block ...passed 00:23:01.995 Test: blockdev writev readv size > 128k ...passed 00:23:01.995 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:01.995 Test: blockdev comparev and writev ...passed 00:23:01.995 Test: blockdev nvme passthru rw ...passed 00:23:01.995 Test: blockdev nvme passthru vendor specific ...passed 00:23:01.995 Test: blockdev nvme admin passthru ...passed 00:23:01.995 Test: blockdev copy ...passed 00:23:01.995 Suite: bdevio tests on: nvme0n3 00:23:01.995 Test: blockdev write read block ...passed 00:23:01.995 Test: blockdev write zeroes read block ...passed 00:23:01.995 Test: blockdev write zeroes read no split ...passed 00:23:01.995 Test: blockdev write zeroes read split ...passed 00:23:01.995 Test: blockdev write zeroes read split partial ...passed 00:23:01.995 Test: blockdev reset ...passed 00:23:01.995 Test: blockdev write read 8 blocks ...passed 00:23:01.995 Test: blockdev write read size > 128k ...passed 00:23:01.995 Test: blockdev write read invalid size ...passed 00:23:01.995 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:01.995 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:01.995 Test: blockdev write read max offset ...passed 00:23:01.995 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:01.995 Test: blockdev writev readv 8 blocks 
...passed 00:23:01.995 Test: blockdev writev readv 30 x 1block ...passed 00:23:01.995 Test: blockdev writev readv block ...passed 00:23:01.995 Test: blockdev writev readv size > 128k ...passed 00:23:01.995 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:01.995 Test: blockdev comparev and writev ...passed 00:23:01.995 Test: blockdev nvme passthru rw ...passed 00:23:01.995 Test: blockdev nvme passthru vendor specific ...passed 00:23:01.995 Test: blockdev nvme admin passthru ...passed 00:23:01.995 Test: blockdev copy ...passed 00:23:01.995 Suite: bdevio tests on: nvme0n2 00:23:01.995 Test: blockdev write read block ...passed 00:23:01.995 Test: blockdev write zeroes read block ...passed 00:23:01.995 Test: blockdev write zeroes read no split ...passed 00:23:02.258 Test: blockdev write zeroes read split ...passed 00:23:02.258 Test: blockdev write zeroes read split partial ...passed 00:23:02.258 Test: blockdev reset ...passed 00:23:02.258 Test: blockdev write read 8 blocks ...passed 00:23:02.258 Test: blockdev write read size > 128k ...passed 00:23:02.258 Test: blockdev write read invalid size ...passed 00:23:02.258 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:02.258 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:02.258 Test: blockdev write read max offset ...passed 00:23:02.258 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:02.258 Test: blockdev writev readv 8 blocks ...passed 00:23:02.258 Test: blockdev writev readv 30 x 1block ...passed 00:23:02.258 Test: blockdev writev readv block ...passed 00:23:02.258 Test: blockdev writev readv size > 128k ...passed 00:23:02.258 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:02.258 Test: blockdev comparev and writev ...passed 00:23:02.258 Test: blockdev nvme passthru rw ...passed 00:23:02.258 Test: blockdev nvme passthru vendor specific ...passed 00:23:02.258 Test: blockdev nvme admin passthru ...passed 00:23:02.258 Test: blockdev copy ...passed 00:23:02.258 Suite: bdevio tests on: nvme0n1 00:23:02.258 Test: blockdev write read block ...passed 00:23:02.258 Test: blockdev write zeroes read block ...passed 00:23:02.258 Test: blockdev write zeroes read no split ...passed 00:23:02.258 Test: blockdev write zeroes read split ...passed 00:23:02.258 Test: blockdev write zeroes read split partial ...passed 00:23:02.258 Test: blockdev reset ...passed 00:23:02.258 Test: blockdev write read 8 blocks ...passed 00:23:02.258 Test: blockdev write read size > 128k ...passed 00:23:02.258 Test: blockdev write read invalid size ...passed 00:23:02.258 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:02.258 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:02.258 Test: blockdev write read max offset ...passed 00:23:02.258 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:02.258 Test: blockdev writev readv 8 blocks ...passed 00:23:02.258 Test: blockdev writev readv 30 x 1block ...passed 00:23:02.258 Test: blockdev writev readv block ...passed 00:23:02.258 Test: blockdev writev readv size > 128k ...passed 00:23:02.258 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:02.258 Test: blockdev comparev and writev ...passed 00:23:02.258 Test: blockdev nvme passthru rw ...passed 00:23:02.258 Test: blockdev nvme passthru vendor specific ...passed 00:23:02.258 Test: blockdev nvme admin passthru ...passed 00:23:02.258 Test: blockdev copy ...passed 
00:23:02.258 00:23:02.258 Run Summary: Type Total Ran Passed Failed Inactive 00:23:02.258 suites 6 6 n/a 0 0 00:23:02.258 tests 138 138 138 0 0 00:23:02.258 asserts 780 780 780 0 n/a 00:23:02.258 00:23:02.258 Elapsed time = 1.483 seconds 00:23:02.258 0 00:23:02.258 13:42:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74166 00:23:02.258 13:42:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74166 ']' 00:23:02.258 13:42:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74166 00:23:02.258 13:42:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:23:02.258 13:42:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.258 13:42:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74166 00:23:02.258 13:42:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:02.258 13:42:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:02.258 13:42:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74166' 00:23:02.258 killing process with pid 74166 00:23:02.258 13:42:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74166 00:23:02.258 13:42:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74166 00:23:03.636 13:42:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:23:03.636 00:23:03.636 real 0m2.885s 00:23:03.636 user 0m7.116s 00:23:03.636 sys 0m0.431s 00:23:03.636 13:42:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:03.636 13:42:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:03.636 ************************************ 00:23:03.636 END TEST bdev_bounds 00:23:03.636 ************************************ 00:23:03.636 13:42:15 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:23:03.636 13:42:15 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:03.636 13:42:15 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:03.636 13:42:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:03.636 ************************************ 00:23:03.636 START TEST bdev_nbd 00:23:03.636 ************************************ 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
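bdev_nbd, which the trace is entering here, round-trips every bdev through a kernel NBD node: bdev_svc exposes an RPC socket at /var/tmp/spdk-nbd.sock, each bdev is attached to a /dev/nbdX device, one direct 4 KiB dd read proves the kernel block path reaches the xnvme backend, and the node is detached again until nbd_get_disks reports an empty list. The verify cycle for a single device, condensed (a sketch; the dd destination is illustrative — the harness writes to test/bdev/nbdtest inside the repo):

  sock=/var/tmp/spdk-nbd.sock
  scripts/rpc.py -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0
  # One 4 KiB O_DIRECT read through the kernel block device is the
  # pass/fail check; it only succeeds if the bdev answers the I/O.
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
  scripts/rpc.py -s "$sock" nbd_get_disks   # prints [] once all nodes are detached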
00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74230 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74230 /var/tmp/spdk-nbd.sock 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74230 ']' 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.636 13:42:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:03.636 [2024-11-20 13:42:15.589161] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:23:03.636 [2024-11-20 13:42:15.589298] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.896 [2024-11-20 13:42:15.777793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.155 [2024-11-20 13:42:15.904382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.723 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.723 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:23:04.723 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:23:04.723 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:04.723 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:04.723 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:23:04.723 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:23:04.723 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:04.723 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:04.723 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:23:04.723 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:23:04.723 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:23:04.723 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:23:04.723 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:04.723 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:23:04.981 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:23:04.981 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:23:04.981 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:23:04.981 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:04.981 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:04.982 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:04.982 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:04.982 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:04.982 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:04.982 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:04.982 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:04.982 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:04.982 
1+0 records in 00:23:04.982 1+0 records out 00:23:04.982 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470053 s, 8.7 MB/s 00:23:04.982 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.982 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:04.982 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.982 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:04.982 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:04.982 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:04.982 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:04.982 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:23:05.240 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:23:05.240 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:23:05.240 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:23:05.240 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:05.240 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:05.240 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:05.240 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:05.241 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:05.241 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:05.241 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:05.241 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:05.241 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.241 1+0 records in 00:23:05.241 1+0 records out 00:23:05.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679551 s, 6.0 MB/s 00:23:05.241 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.241 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:05.241 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.241 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:05.241 13:42:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:05.241 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:05.241 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:05.241 13:42:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:23:05.499 13:42:17 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.499 1+0 records in 00:23:05.499 1+0 records out 00:23:05.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000709387 s, 5.8 MB/s 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:05.499 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.758 1+0 records in 00:23:05.758 1+0 records out 00:23:05.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000965665 s, 4.2 MB/s 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:05.758 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:23:06.016 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:06.017 1+0 records in 00:23:06.017 1+0 records out 00:23:06.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804961 s, 5.1 MB/s 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:06.017 13:42:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:23:06.275 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:23:06.275 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:23:06.275 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:23:06.275 13:42:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:23:06.275 13:42:18 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:06.276 13:42:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:06.276 13:42:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:06.276 13:42:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:23:06.276 13:42:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:06.276 13:42:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:06.276 13:42:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:06.276 13:42:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:06.276 1+0 records in 00:23:06.276 1+0 records out 00:23:06.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710959 s, 5.8 MB/s 00:23:06.276 13:42:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.276 13:42:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:06.276 13:42:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.276 13:42:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:06.276 13:42:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:06.276 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:06.276 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:06.276 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:06.534 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:23:06.534 { 00:23:06.534 "nbd_device": "/dev/nbd0", 00:23:06.534 "bdev_name": "nvme0n1" 00:23:06.534 }, 00:23:06.534 { 00:23:06.534 "nbd_device": "/dev/nbd1", 00:23:06.534 "bdev_name": "nvme0n2" 00:23:06.534 }, 00:23:06.534 { 00:23:06.534 "nbd_device": "/dev/nbd2", 00:23:06.534 "bdev_name": "nvme0n3" 00:23:06.534 }, 00:23:06.534 { 00:23:06.534 "nbd_device": "/dev/nbd3", 00:23:06.534 "bdev_name": "nvme1n1" 00:23:06.534 }, 00:23:06.534 { 00:23:06.534 "nbd_device": "/dev/nbd4", 00:23:06.534 "bdev_name": "nvme2n1" 00:23:06.534 }, 00:23:06.534 { 00:23:06.534 "nbd_device": "/dev/nbd5", 00:23:06.534 "bdev_name": "nvme3n1" 00:23:06.534 } 00:23:06.534 ]' 00:23:06.534 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:23:06.534 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:23:06.534 { 00:23:06.534 "nbd_device": "/dev/nbd0", 00:23:06.534 "bdev_name": "nvme0n1" 00:23:06.534 }, 00:23:06.534 { 00:23:06.534 "nbd_device": "/dev/nbd1", 00:23:06.534 "bdev_name": "nvme0n2" 00:23:06.534 }, 00:23:06.534 { 00:23:06.534 "nbd_device": "/dev/nbd2", 00:23:06.534 "bdev_name": "nvme0n3" 00:23:06.534 }, 00:23:06.534 { 00:23:06.534 "nbd_device": "/dev/nbd3", 00:23:06.534 "bdev_name": "nvme1n1" 00:23:06.534 }, 00:23:06.534 { 00:23:06.534 "nbd_device": "/dev/nbd4", 00:23:06.534 "bdev_name": "nvme2n1" 00:23:06.534 }, 00:23:06.534 { 00:23:06.534 "nbd_device": "/dev/nbd5", 00:23:06.534 "bdev_name": "nvme3n1" 00:23:06.534 } 00:23:06.534 ]' 00:23:06.534 13:42:18 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:23:06.534 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:23:06.534 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:06.534 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:23:06.534 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:06.534 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:06.534 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:06.534 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:06.792 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:06.792 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:06.792 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:06.792 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:06.792 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:06.792 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:06.792 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:06.792 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:06.792 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:06.792 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:07.050 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:07.050 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:07.050 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:07.050 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:07.050 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:07.050 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:07.050 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:07.050 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:07.050 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.050 13:42:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:23:07.308 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:23:07.308 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:23:07.308 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:23:07.308 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:07.308 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:07.308 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:23:07.308 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:07.308 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:07.308 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.308 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:23:07.565 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:23:07.565 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:23:07.565 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:23:07.565 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:07.565 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:07.565 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:23:07.565 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:07.565 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:07.565 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.565 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:23:07.824 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:23:07.824 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:23:07.824 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:23:07.824 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:07.824 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:07.824 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:23:07.824 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:07.824 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:07.825 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.825 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:23:08.084 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:23:08.084 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:23:08.084 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:23:08.084 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.084 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.084 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:23:08.084 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:08.084 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.084 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:08.084 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:08.084 13:42:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
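The teardown above repeats one idiom from nbd_common.sh for every export: detach over the RPC socket, then poll /proc/partitions up to 20 times until the kernel drops the device node. A minimal sketch of that pattern, using the socket and script paths from this run (the helper in the trace is waitfornbd_exit; the sleep interval between polls is an assumption, not visible here):

    #!/usr/bin/env bash
    # Detach one SPDK NBD export and wait until the kernel forgets the device.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    dev=/dev/nbd0

    "$rpc" -s "$sock" nbd_stop_disk "$dev"

    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do
        # grep failing means the entry left /proc/partitions: detach complete.
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1    # assumed poll interval
    done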
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:08.343 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:23:08.602 /dev/nbd0 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:08.602 1+0 records in 00:23:08.602 1+0 records out 00:23:08.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432497 s, 9.5 MB/s 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:08.602 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:23:08.860 /dev/nbd1 00:23:08.860 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:08.860 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:08.860 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:08.860 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:08.860 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:08.860 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:08.861 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:08.861 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:08.861 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:08.861 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:08.861 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:08.861 1+0 records in 00:23:08.861 1+0 records out 00:23:08.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501983 s, 8.2 MB/s 00:23:08.861 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:08.861 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:08.861 13:42:20 
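The dd/stat/rm sequence above is waitfornbd from common/autotest_common.sh: after nbd_start_disk, merely appearing in /proc/partitions is not trusted; the helper proves the device serves I/O by reading one 4 KiB block with O_DIRECT and requiring the captured file to be non-empty. Condensed, with paths and sizes as they appear in the trace (the surrounding retry loop is elided):

    #!/usr/bin/env bash
    # Readiness probe: one direct-I/O 4 KiB read must succeed and return data.
    dev=/dev/nbd1
    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

    grep -q -w "$(basename "$dev")" /proc/partitions || exit 1
    dd if="$dev" of="$tmp" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]    # an empty capture would mean a dead device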
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:08.861 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:08.861 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:08.861 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:08.861 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:08.861 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:23:09.120 /dev/nbd10 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.120 1+0 records in 00:23:09.120 1+0 records out 00:23:09.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064949 s, 6.3 MB/s 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:09.120 13:42:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:23:09.379 /dev/nbd11 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:09.379 13:42:21 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.379 1+0 records in 00:23:09.379 1+0 records out 00:23:09.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744189 s, 5.5 MB/s 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:09.379 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:23:09.639 /dev/nbd12 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.639 1+0 records in 00:23:09.639 1+0 records out 00:23:09.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661176 s, 6.2 MB/s 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:09.639 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:23:09.899 /dev/nbd13 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.899 1+0 records in 00:23:09.899 1+0 records out 00:23:09.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000872828 s, 4.7 MB/s 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:09.899 13:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:10.158 { 00:23:10.158 "nbd_device": "/dev/nbd0", 00:23:10.158 "bdev_name": "nvme0n1" 00:23:10.158 }, 00:23:10.158 { 00:23:10.158 "nbd_device": "/dev/nbd1", 00:23:10.158 "bdev_name": "nvme0n2" 00:23:10.158 }, 00:23:10.158 { 00:23:10.158 "nbd_device": "/dev/nbd10", 00:23:10.158 "bdev_name": "nvme0n3" 00:23:10.158 }, 00:23:10.158 { 00:23:10.158 "nbd_device": "/dev/nbd11", 00:23:10.158 "bdev_name": "nvme1n1" 00:23:10.158 }, 00:23:10.158 { 00:23:10.158 "nbd_device": "/dev/nbd12", 00:23:10.158 "bdev_name": "nvme2n1" 00:23:10.158 }, 00:23:10.158 { 00:23:10.158 "nbd_device": "/dev/nbd13", 00:23:10.158 "bdev_name": "nvme3n1" 00:23:10.158 } 00:23:10.158 ]' 00:23:10.158 13:42:22 
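With all six bdevs exported, nbd_get_count reduces the RPC server's state to a single number: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, jq pulls out the device paths, and grep -c counts them, with || true so an empty list yields 0 instead of a failed pipeline (the bare "true" lines in the trace are exactly that fallback). Roughly:

    #!/usr/bin/env bash
    # Count active NBD exports as reported over the RPC socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    json=$("$rpc" -s "$sock" nbd_get_disks)
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)
    echo "active exports: $count"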
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:10.158 { 00:23:10.158 "nbd_device": "/dev/nbd0", 00:23:10.158 "bdev_name": "nvme0n1" 00:23:10.158 }, 00:23:10.158 { 00:23:10.158 "nbd_device": "/dev/nbd1", 00:23:10.158 "bdev_name": "nvme0n2" 00:23:10.158 }, 00:23:10.158 { 00:23:10.158 "nbd_device": "/dev/nbd10", 00:23:10.158 "bdev_name": "nvme0n3" 00:23:10.158 }, 00:23:10.158 { 00:23:10.158 "nbd_device": "/dev/nbd11", 00:23:10.158 "bdev_name": "nvme1n1" 00:23:10.158 }, 00:23:10.158 { 00:23:10.158 "nbd_device": "/dev/nbd12", 00:23:10.158 "bdev_name": "nvme2n1" 00:23:10.158 }, 00:23:10.158 { 00:23:10.158 "nbd_device": "/dev/nbd13", 00:23:10.158 "bdev_name": "nvme3n1" 00:23:10.158 } 00:23:10.158 ]' 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:10.158 /dev/nbd1 00:23:10.158 /dev/nbd10 00:23:10.158 /dev/nbd11 00:23:10.158 /dev/nbd12 00:23:10.158 /dev/nbd13' 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:10.158 /dev/nbd1 00:23:10.158 /dev/nbd10 00:23:10.158 /dev/nbd11 00:23:10.158 /dev/nbd12 00:23:10.158 /dev/nbd13' 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:23:10.158 256+0 records in 00:23:10.158 256+0 records out 00:23:10.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136778 s, 76.7 MB/s 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:10.158 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:10.417 256+0 records in 00:23:10.417 256+0 records out 00:23:10.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124163 s, 8.4 MB/s 00:23:10.417 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:10.417 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:10.417 256+0 records in 00:23:10.417 256+0 records out 00:23:10.417 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.123125 s, 8.5 MB/s 00:23:10.417 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:10.417 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:23:10.676 256+0 records in 00:23:10.676 256+0 records out 00:23:10.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124852 s, 8.4 MB/s 00:23:10.676 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:10.676 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:23:10.676 256+0 records in 00:23:10.676 256+0 records out 00:23:10.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119455 s, 8.8 MB/s 00:23:10.676 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:10.676 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:23:10.934 256+0 records in 00:23:10.934 256+0 records out 00:23:10.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123985 s, 8.5 MB/s 00:23:10.934 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:10.934 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:23:11.218 256+0 records in 00:23:11.218 256+0 records out 00:23:11.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153824 s, 6.8 MB/s 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:11.218 13:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:11.477 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:11.477 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:11.477 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:11.477 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:11.477 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:11.477 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:11.477 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:11.477 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:11.477 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:11.477 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:11.737 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:11.737 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:11.737 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:11.737 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:11.737 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:11.737 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:11.737 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:11.737 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:11.737 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:11.737 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
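The write/compare round trip that just finished is the substance of nbd_dd_data_verify: one 1 MiB slice of /dev/urandom is pushed through every export with oflag=direct, then cmp -b -n 1M reads each device back against the same source file, so every bdev must reproduce its bytes exactly. In outline, with the device list and sizes from this run:

    #!/usr/bin/env bash
    set -e
    # Write a shared random 1 MiB pattern to each export, then read-verify it.
    src=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    devs=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    dd if=/dev/urandom of="$src" bs=4096 count=256
    for d in "${devs[@]}"; do
        dd if="$src" of="$d" bs=4096 count=256 oflag=direct
    done
    for d in "${devs[@]}"; do
        cmp -b -n 1M "$src" "$d"    # any byte mismatch fails the test
    done
    rm "$src"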
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:23:11.996 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:23:11.996 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:23:11.996 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:23:11.996 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:11.996 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:11.996 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:23:11.996 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:11.996 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:11.996 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:11.996 13:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:23:12.255 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:23:12.255 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:23:12.255 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:23:12.255 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.255 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.255 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:23:12.255 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:12.255 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.255 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:12.255 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:23:12.514 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:23:12.514 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:23:12.514 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:23:12.514 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.514 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.514 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:23:12.514 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:12.514 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.514 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:12.514 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:23:12.773 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:23:12.773 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:23:12.773 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:23:12.773 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.773 13:42:24 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.773 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:23:12.773 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:12.773 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.773 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:12.773 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:12.773 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:23:13.032 13:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:23:13.292 malloc_lvol_verify 00:23:13.292 13:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:23:13.551 7064fb33-675c-42cf-b3b8-6a674a7152fc 00:23:13.551 13:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:23:13.811 b6e0b886-9301-4797-af68-358e44c43919 00:23:13.811 13:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:23:14.070 /dev/nbd0 00:23:14.070 13:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:23:14.070 13:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:23:14.070 13:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:23:14.070 13:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:23:14.070 13:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
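nbd_with_lvol_verify exercises the full logical-volume path before declaring NBD healthy: a 16 MB malloc bdev with 512-byte blocks, an lvstore on top of it, a 4 MB lvol carved from the store, an NBD export of that lvol, a non-zero capacity check in sysfs, and finally a real mkfs.ext4 on the device. The same chain, condensed to the RPCs the trace shows:

    #!/usr/bin/env bash
    set -e
    # malloc bdev -> lvstore -> lvol -> NBD export -> mkfs as the final proof.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0

    # The kernel must report a non-zero size (8192 sectors here) before mkfs.
    (( $(cat /sys/block/nbd0/size) > 0 ))
    mkfs.ext4 /dev/nbd0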
00:23:14.070 mke2fs 1.47.0 (5-Feb-2023) 00:23:14.070 Discarding device blocks: 0/4096 done 00:23:14.070 Creating filesystem with 4096 1k blocks and 1024 inodes 00:23:14.070 00:23:14.070 Allocating group tables: 0/1 done 00:23:14.070 Writing inode tables: 0/1 done 00:23:14.070 Creating journal (1024 blocks): done 00:23:14.070 Writing superblocks and filesystem accounting information: 0/1 done 00:23:14.070 00:23:14.070 13:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:14.070 13:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:14.070 13:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:14.070 13:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:14.070 13:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:14.071 13:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:14.071 13:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74230 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74230 ']' 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74230 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74230 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74230' 00:23:14.329 killing process with pid 74230 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74230 00:23:14.329 13:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74230 00:23:15.709 13:42:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:23:15.709 00:23:15.709 real 0m11.889s 00:23:15.709 user 0m15.467s 00:23:15.709 sys 0m5.057s 00:23:15.709 ************************************ 00:23:15.709 END TEST bdev_nbd 00:23:15.709 ************************************ 00:23:15.709 13:42:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:15.709 
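killprocess in autotest_common.sh deliberately does not kill blindly: it validates that the PID argument is non-empty, confirms the process is still alive with kill -0, reads the command name back via ps to guard against PID reuse (here it is reactor_0, the SPDK app started for this test), refuses to signal anything named sudo, and only then kills and waits. A sketch of those guards (mirroring the trace; the final wait only works because the target is a child of the test shell):

    #!/usr/bin/env bash
    # Kill a test-owned SPDK process by PID, guarding against PID reuse.
    pid=74230    # PID of the app under test in this run

    [ -n "$pid" ] || exit 1
    kill -0 "$pid"                             # still alive?
    name=$(ps --no-headers -o comm= "$pid")    # expect an SPDK reactor here
    [ "$name" != sudo ] || exit 1              # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true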
13:42:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:15.709 13:42:27 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:23:15.709 13:42:27 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:23:15.709 13:42:27 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:23:15.709 13:42:27 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:23:15.709 13:42:27 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:15.709 13:42:27 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:15.709 13:42:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:15.709 ************************************ 00:23:15.709 START TEST bdev_fio 00:23:15.709 ************************************ 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:23:15.709 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:15.709 ************************************ 00:23:15.709 START TEST bdev_fio_rw_verify 00:23:15.709 ************************************ 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
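The fio job file here is generated on the fly, not checked in: fio_config_gen writes a verify template, serialize_overlap=1 is appended for this fio version, and the loop above adds one [job_<bdev>] stanza per device. Note that filename=nvme0n1 names an SPDK bdev, not a /dev node; the spdk_bdev ioengine resolves it against the JSON config passed at launch. The per-job generation step amounts to the following (bdev list as in this run; the template's global verify options come from fio_config_gen and are not visible in the trace):

    #!/usr/bin/env bash
    # Append one job stanza per bdev to the generated fio config.
    cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    for b in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
        {
            echo "[job_${b}]"
            echo "filename=${b}"    # bdev name, resolved via --spdk_json_conf
        } >> "$cfg"
    done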
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:15.709 13:42:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:15.968 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:15.968 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:15.968 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:15.968 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:15.968 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:15.968 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:15.968 fio-3.35 00:23:15.968 Starting 6 threads 00:23:28.173 00:23:28.174 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74644: Wed Nov 20 13:42:38 2024 00:23:28.174 read: IOPS=31.5k, BW=123MiB/s (129MB/s)(1229MiB/10001msec) 00:23:28.174 slat (usec): min=2, max=894, avg= 7.03, stdev= 6.02 00:23:28.174 clat (usec): min=112, max=4651, avg=567.32, 
stdev=261.11 00:23:28.174 lat (usec): min=116, max=4659, avg=574.35, stdev=262.08 00:23:28.174 clat percentiles (usec): 00:23:28.174 | 50.000th=[ 545], 99.000th=[ 1352], 99.900th=[ 2343], 99.990th=[ 3818], 00:23:28.174 | 99.999th=[ 4113] 00:23:28.174 write: IOPS=32.0k, BW=125MiB/s (131MB/s)(1249MiB/10001msec); 0 zone resets 00:23:28.174 slat (usec): min=11, max=3507, avg=28.30, stdev=39.06 00:23:28.174 clat (usec): min=83, max=4718, avg=668.82, stdev=300.80 00:23:28.174 lat (usec): min=101, max=4779, avg=697.11, stdev=307.35 00:23:28.174 clat percentiles (usec): 00:23:28.174 | 50.000th=[ 635], 99.000th=[ 1647], 99.900th=[ 2638], 99.990th=[ 4047], 00:23:28.174 | 99.999th=[ 4424] 00:23:28.174 bw ( KiB/s): min=98818, max=158500, per=99.59%, avg=127321.42, stdev=2841.25, samples=114 00:23:28.174 iops : min=24702, max=39625, avg=31829.63, stdev=710.37, samples=114 00:23:28.174 lat (usec) : 100=0.01%, 250=6.21%, 500=29.43%, 750=37.65%, 1000=19.50% 00:23:28.174 lat (msec) : 2=6.94%, 4=0.27%, 10=0.01% 00:23:28.174 cpu : usr=54.65%, sys=29.66%, ctx=8283, majf=0, minf=26499 00:23:28.174 IO depths : 1=11.9%, 2=24.4%, 4=50.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:28.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.174 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.174 issued rwts: total=314704,319658,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.174 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:28.174 00:23:28.174 Run status group 0 (all jobs): 00:23:28.174 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=1229MiB (1289MB), run=10001-10001msec 00:23:28.174 WRITE: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=1249MiB (1309MB), run=10001-10001msec 00:23:28.433 ----------------------------------------------------- 00:23:28.433 Suppressions used: 00:23:28.433 count bytes template 00:23:28.433 6 48 /usr/src/fio/parse.c 00:23:28.433 4740 455040 /usr/src/fio/iolog.c 00:23:28.433 1 8 libtcmalloc_minimal.so 00:23:28.433 1 904 libcrypto.so 00:23:28.433 ----------------------------------------------------- 00:23:28.433 00:23:28.433 00:23:28.433 real 0m12.785s 00:23:28.433 user 0m34.981s 00:23:28.433 sys 0m18.265s 00:23:28.433 13:42:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:28.433 13:42:40 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:23:28.433 ************************************ 00:23:28.433 END TEST bdev_fio_rw_verify 00:23:28.433 ************************************ 00:23:28.433 13:42:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:23:28.433 13:42:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:28.433 13:42:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:23:28.433 13:42:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:28.433 13:42:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:23:28.433 13:42:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:23:28.433 13:42:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:28.433 13:42:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
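The verify run above was launched with a detail worth noting: fio itself is not ASan-instrumented, so the test inspects which sanitizer runtime the SPDK fio plugin links against (ldd ... | grep libasan) and preloads that runtime ahead of the plugin; loading an ASan-built shared object into an uninstrumented host binary generally requires the runtime to come first in the library list. Reassembled from the trace, the launch is approximately:

    #!/usr/bin/env bash
    # Run fio with the SPDK bdev ioengine, preloading the matching ASan runtime.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output

For scale: the pass sustained roughly 31-32k IOPS in each direction across the six jobs at 4 KiB QD8, with sanitizers enabled throughout.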
fio_dir=/usr/src/fio 00:23:28.433 13:42:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:28.433 13:42:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:23:28.433 13:42:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:28.433 13:42:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:28.692 13:42:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:28.692 13:42:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:23:28.692 13:42:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:23:28.692 13:42:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:23:28.692 13:42:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:23:28.693 13:42:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3318f6a1-e3b2-4ef0-a383-e5ef5b5950dc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3318f6a1-e3b2-4ef0-a383-e5ef5b5950dc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "cad7afbb-d56a-440a-bd90-6c0da0cb465c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cad7afbb-d56a-440a-bd90-6c0da0cb465c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "1f507975-dee4-4031-853b-ee318c4045cf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1f507975-dee4-4031-853b-ee318c4045cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "848fb9e7-fb99-4296-9460-7b9e1949f001"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "848fb9e7-fb99-4296-9460-7b9e1949f001",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "6d78ac57-dd6d-40bc-82a7-7c98e7accda5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6d78ac57-dd6d-40bc-82a7-7c98e7accda5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "8eca6275-a8cd-4d4e-a655-9a6a32ee3170"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8eca6275-a8cd-4d4e-a655-9a6a32ee3170",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:23:28.693 13:42:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:23:28.693 13:42:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:28.693 /home/vagrant/spdk_repo/spdk 00:23:28.693 13:42:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:23:28.693 13:42:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:23:28.693 13:42:40 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:23:28.693 00:23:28.693 real 0m13.013s 00:23:28.693 user 0m35.097s 00:23:28.693 sys 0m18.382s 00:23:28.693 13:42:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:28.693 13:42:40 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:28.693 ************************************ 00:23:28.693 END TEST bdev_fio 00:23:28.693 ************************************ 00:23:28.693 13:42:40 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:28.693 13:42:40 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:28.693 13:42:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:28.693 13:42:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:28.693 13:42:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:28.693 ************************************ 00:23:28.693 START TEST bdev_verify 00:23:28.693 ************************************ 00:23:28.693 13:42:40 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:28.693 [2024-11-20 13:42:40.627589] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:23:28.693 [2024-11-20 13:42:40.627734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74825 ] 00:23:28.951 [2024-11-20 13:42:40.809341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:29.210 [2024-11-20 13:42:40.947709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.210 [2024-11-20 13:42:40.947738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.777 Running I/O for 5 seconds... 
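The verify pass now starting can be reproduced outside the harness with the bdevperf invocation recorded above (per bdevperf's usage text: -q queue depth, -o I/O size in bytes, -w workload type, -t run time in seconds, -m core mask; -C is passed through exactly as the harness does). A sketch, assuming it is run from the SPDK repo root with the generated bdev.json in place:

    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # the big-I/O variant further down swaps -o 4096 for -o 65536, and the
    # later write_zeroes run drops -C/-m and uses -w write_zeroes -t 1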
00:23:32.090 26048.00 IOPS, 101.75 MiB/s [2024-11-20T13:42:45.028Z] 24240.00 IOPS, 94.69 MiB/s [2024-11-20T13:42:45.964Z] 23872.00 IOPS, 93.25 MiB/s [2024-11-20T13:42:46.900Z] 24055.25 IOPS, 93.97 MiB/s [2024-11-20T13:42:46.900Z] 23244.40 IOPS, 90.80 MiB/s 00:23:34.943 Latency(us) 00:23:34.943 [2024-11-20T13:42:46.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.943 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:34.943 Verification LBA range: start 0x0 length 0x80000 00:23:34.943 nvme0n1 : 5.08 1815.30 7.09 0.00 0.00 70409.12 10580.51 88855.24 00:23:34.943 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:34.943 Verification LBA range: start 0x80000 length 0x80000 00:23:34.943 nvme0n1 : 5.05 1722.59 6.73 0.00 0.00 74192.79 11896.49 70747.30 00:23:34.943 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:34.943 Verification LBA range: start 0x0 length 0x80000 00:23:34.943 nvme0n2 : 5.06 1795.04 7.01 0.00 0.00 71093.90 15581.25 76221.79 00:23:34.943 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:34.943 Verification LBA range: start 0x80000 length 0x80000 00:23:34.943 nvme0n2 : 5.05 1725.22 6.74 0.00 0.00 73973.74 12633.45 70747.30 00:23:34.943 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:34.943 Verification LBA range: start 0x0 length 0x80000 00:23:34.943 nvme0n3 : 5.07 1793.55 7.01 0.00 0.00 71058.59 13423.04 74958.44 00:23:34.943 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:34.943 Verification LBA range: start 0x80000 length 0x80000 00:23:34.943 nvme0n3 : 5.06 1719.10 6.72 0.00 0.00 74129.50 15581.25 69062.84 00:23:34.943 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:34.943 Verification LBA range: start 0x0 length 0x20000 00:23:34.943 nvme1n1 : 5.07 1792.54 7.00 0.00 0.00 71015.25 11106.90 77906.25 00:23:34.943 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:34.943 Verification LBA range: start 0x20000 length 0x20000 00:23:34.943 nvme1n1 : 5.07 1717.59 6.71 0.00 0.00 74094.92 15475.97 67799.49 00:23:34.943 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:34.943 Verification LBA range: start 0x0 length 0xa0000 00:23:34.943 nvme2n1 : 5.05 1799.60 7.03 0.00 0.00 70629.65 9633.00 92224.15 00:23:34.943 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:34.943 Verification LBA range: start 0xa0000 length 0xa0000 00:23:34.943 nvme2n1 : 5.08 1714.61 6.70 0.00 0.00 74111.99 10369.95 78327.36 00:23:34.943 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:34.943 Verification LBA range: start 0x0 length 0xbd0bd 00:23:34.943 nvme3n1 : 5.08 2788.34 10.89 0.00 0.00 45460.87 4605.94 63167.23 00:23:34.943 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:34.943 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:23:34.943 nvme3n1 : 5.08 2577.22 10.07 0.00 0.00 49138.67 5027.06 59798.31 00:23:34.943 [2024-11-20T13:42:46.900Z] =================================================================================================================== 00:23:34.943 [2024-11-20T13:42:46.900Z] Total : 22960.71 89.69 0.00 0.00 66526.75 4605.94 92224.15 00:23:36.322 00:23:36.322 real 0m7.330s 00:23:36.322 user 0m11.282s 00:23:36.322 sys 0m2.093s 00:23:36.322 ************************************ 00:23:36.322 
END TEST bdev_verify 00:23:36.322 ************************************ 00:23:36.322 13:42:47 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.322 13:42:47 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:23:36.322 13:42:47 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:36.322 13:42:47 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:36.322 13:42:47 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.322 13:42:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:36.322 ************************************ 00:23:36.322 START TEST bdev_verify_big_io 00:23:36.322 ************************************ 00:23:36.322 13:42:47 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:36.322 [2024-11-20 13:42:48.036133] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:23:36.322 [2024-11-20 13:42:48.036272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74929 ] 00:23:36.322 [2024-11-20 13:42:48.224861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:36.582 [2024-11-20 13:42:48.341921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.582 [2024-11-20 13:42:48.341947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.153 Running I/O for 5 seconds... 
00:23:41.587 1472.00 IOPS, 92.00 MiB/s [2024-11-20T13:42:54.924Z] 2874.50 IOPS, 179.66 MiB/s [2024-11-20T13:42:54.924Z] 3045.00 IOPS, 190.31 MiB/s 00:23:42.967 Latency(us) 00:23:42.967 [2024-11-20T13:42:54.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.967 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:42.967 Verification LBA range: start 0x0 length 0x8000 00:23:42.967 nvme0n1 : 5.52 142.32 8.90 0.00 0.00 861478.71 18423.78 2129156.73 00:23:42.967 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:42.967 Verification LBA range: start 0x8000 length 0x8000 00:23:42.967 nvme0n1 : 5.53 170.67 10.67 0.00 0.00 732434.84 24003.55 943297.29 00:23:42.967 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:42.967 Verification LBA range: start 0x0 length 0x8000 00:23:42.967 nvme0n2 : 5.57 176.79 11.05 0.00 0.00 688214.45 74116.22 677152.69 00:23:42.967 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:42.967 Verification LBA range: start 0x8000 length 0x8000 00:23:42.967 nvme0n2 : 5.53 157.96 9.87 0.00 0.00 771406.29 99804.22 869181.07 00:23:42.967 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:42.967 Verification LBA range: start 0x0 length 0x8000 00:23:42.967 nvme0n3 : 5.57 205.37 12.84 0.00 0.00 581631.40 42532.60 596298.64 00:23:42.967 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:42.967 Verification LBA range: start 0x8000 length 0x8000 00:23:42.967 nvme0n3 : 5.49 151.59 9.47 0.00 0.00 788438.88 57692.74 956772.96 00:23:42.967 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:42.967 Verification LBA range: start 0x0 length 0x2000 00:23:42.967 nvme1n1 : 5.64 173.08 10.82 0.00 0.00 669898.82 64430.57 1334091.87 00:23:42.967 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:42.967 Verification LBA range: start 0x2000 length 0x2000 00:23:42.967 nvme1n1 : 5.61 142.65 8.92 0.00 0.00 809635.17 104857.60 1684459.44 00:23:42.967 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:42.967 Verification LBA range: start 0x0 length 0xa000 00:23:42.967 nvme2n1 : 5.65 150.11 9.38 0.00 0.00 754832.43 14739.02 1630556.74 00:23:42.967 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:42.967 Verification LBA range: start 0xa000 length 0xa000 00:23:42.967 nvme2n1 : 5.74 144.71 9.04 0.00 0.00 783680.29 47796.54 2102205.38 00:23:42.967 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:42.967 Verification LBA range: start 0x0 length 0xbd0b 00:23:42.967 nvme3n1 : 5.79 196.12 12.26 0.00 0.00 566465.79 3092.56 1677721.60 00:23:42.967 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:42.967 Verification LBA range: start 0xbd0b length 0xbd0b 00:23:42.967 nvme3n1 : 5.76 257.99 16.12 0.00 0.00 431422.15 3947.95 576085.13 00:23:42.967 [2024-11-20T13:42:54.925Z] =================================================================================================================== 00:23:42.968 [2024-11-20T13:42:54.925Z] Total : 2069.36 129.34 0.00 0.00 680261.99 3092.56 2129156.73 00:23:44.347 00:23:44.347 real 0m8.244s 00:23:44.347 user 0m14.876s 00:23:44.347 sys 0m0.624s 00:23:44.347 13:42:56 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.347 13:42:56 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:23:44.347 ************************************ 00:23:44.347 END TEST bdev_verify_big_io 00:23:44.347 ************************************ 00:23:44.347 13:42:56 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:44.347 13:42:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:44.347 13:42:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:44.347 13:42:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:44.347 ************************************ 00:23:44.347 START TEST bdev_write_zeroes 00:23:44.347 ************************************ 00:23:44.347 13:42:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:44.606 [2024-11-20 13:42:56.359335] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:23:44.606 [2024-11-20 13:42:56.359470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75039 ] 00:23:44.606 [2024-11-20 13:42:56.547287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.866 [2024-11-20 13:42:56.673147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.434 Running I/O for 1 seconds... 
00:23:46.372 50400.00 IOPS, 196.88 MiB/s 00:23:46.372 Latency(us) 00:23:46.372 [2024-11-20T13:42:58.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.372 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:46.372 nvme0n1 : 1.03 7860.39 30.70 0.00 0.00 16270.29 8159.10 27372.47 00:23:46.372 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:46.372 nvme0n2 : 1.03 7853.37 30.68 0.00 0.00 16273.46 8159.10 28004.14 00:23:46.372 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:46.372 nvme0n3 : 1.03 7845.29 30.65 0.00 0.00 16278.70 8211.74 28425.25 00:23:46.372 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:46.372 nvme1n1 : 1.03 7836.90 30.61 0.00 0.00 16285.13 8264.38 28846.37 00:23:46.372 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:46.372 nvme2n1 : 1.03 7828.42 30.58 0.00 0.00 16291.61 8264.38 29056.93 00:23:46.372 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:46.372 nvme3n1 : 1.02 11209.95 43.79 0.00 0.00 11366.59 4421.71 23266.60 00:23:46.372 [2024-11-20T13:42:58.329Z] =================================================================================================================== 00:23:46.372 [2024-11-20T13:42:58.329Z] Total : 50434.32 197.01 0.00 0.00 15190.37 4421.71 29056.93 00:23:47.751 00:23:47.751 real 0m3.079s 00:23:47.751 user 0m2.298s 00:23:47.751 sys 0m0.598s 00:23:47.751 13:42:59 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.751 13:42:59 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:23:47.751 ************************************ 00:23:47.751 END TEST bdev_write_zeroes 00:23:47.751 ************************************ 00:23:47.751 13:42:59 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:47.751 13:42:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:47.751 13:42:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.751 13:42:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:47.751 ************************************ 00:23:47.751 START TEST bdev_json_nonenclosed 00:23:47.751 ************************************ 00:23:47.751 13:42:59 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:47.751 [2024-11-20 13:42:59.516013] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:23:47.751 [2024-11-20 13:42:59.516146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75098 ] 00:23:47.751 [2024-11-20 13:42:59.700489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.010 [2024-11-20 13:42:59.824070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.010 [2024-11-20 13:42:59.824172] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:23:48.010 [2024-11-20 13:42:59.824195] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:48.010 [2024-11-20 13:42:59.824207] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:48.269 00:23:48.269 real 0m0.684s 00:23:48.269 user 0m0.427s 00:23:48.269 sys 0m0.151s 00:23:48.269 13:43:00 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.269 ************************************ 00:23:48.269 END TEST bdev_json_nonenclosed 00:23:48.269 ************************************ 00:23:48.269 13:43:00 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:23:48.269 13:43:00 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:48.269 13:43:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:48.269 13:43:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.269 13:43:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:48.269 ************************************ 00:23:48.269 START TEST bdev_json_nonarray 00:23:48.269 ************************************ 00:23:48.269 13:43:00 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:48.528 [2024-11-20 13:43:00.268065] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:23:48.528 [2024-11-20 13:43:00.268368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75125 ] 00:23:48.528 [2024-11-20 13:43:00.449551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.788 [2024-11-20 13:43:00.574329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.788 [2024-11-20 13:43:00.574448] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
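Both JSON negative tests feed a deliberately malformed --json file and pass only if bdevperf refuses it with a non-zero exit (hence the spdk_app_stop'd-on-non-zero warnings above). The real fixtures are test/bdev/nonenclosed.json and test/bdev/nonarray.json; their contents are not shown in the log, but inputs of the following shape would trigger the two errors seen here (hypothetical contents, inferred from the error strings):

    printf '"subsystems": []\n' > /tmp/nonenclosed.json   # top level not an object -> "not enclosed in {}"
    printf '{ "subsystems": {} }\n' > /tmp/nonarray.json  # object instead of array -> "'subsystems' should be an array"
    ./build/examples/bdevperf --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1
    echo "exit=$?"   # expected non-zero for the test to pass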
00:23:48.788 [2024-11-20 13:43:00.574472] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:48.788 [2024-11-20 13:43:00.574485] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:49.047 00:23:49.047 real 0m0.668s 00:23:49.047 user 0m0.411s 00:23:49.047 sys 0m0.150s 00:23:49.047 13:43:00 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:49.047 13:43:00 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:23:49.047 ************************************ 00:23:49.047 END TEST bdev_json_nonarray 00:23:49.047 ************************************ 00:23:49.047 13:43:00 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:23:49.047 13:43:00 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:23:49.047 13:43:00 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:23:49.047 13:43:00 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:23:49.047 13:43:00 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:23:49.047 13:43:00 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:23:49.047 13:43:00 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:49.047 13:43:00 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:23:49.047 13:43:00 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:23:49.047 13:43:00 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:23:49.047 13:43:00 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:23:49.047 13:43:00 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:49.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:55.256 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:55.256 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:57.793 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:23:57.793 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:23:57.793 ************************************ 00:23:57.793 END TEST blockdev_xnvme 00:23:57.793 ************************************ 00:23:57.793 00:23:57.793 real 1m5.032s 00:23:57.793 user 1m35.789s 00:23:57.793 sys 0m40.822s 00:23:57.793 13:43:09 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.793 13:43:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:57.793 13:43:09 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:23:57.793 13:43:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:57.793 13:43:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.793 13:43:09 -- common/autotest_common.sh@10 -- # set +x 00:23:57.793 ************************************ 00:23:57.793 START TEST ublk 00:23:57.793 ************************************ 00:23:57.793 13:43:09 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:23:58.054 * Looking for test storage... 
00:23:58.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:23:58.054 13:43:09 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:58.054 13:43:09 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:23:58.054 13:43:09 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:58.054 13:43:09 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:58.054 13:43:09 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:58.054 13:43:09 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:58.054 13:43:09 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:58.054 13:43:09 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:23:58.054 13:43:09 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:23:58.054 13:43:09 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:23:58.054 13:43:09 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:23:58.054 13:43:09 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:23:58.054 13:43:09 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:23:58.054 13:43:09 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:23:58.054 13:43:09 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:58.054 13:43:09 ublk -- scripts/common.sh@344 -- # case "$op" in 00:23:58.054 13:43:09 ublk -- scripts/common.sh@345 -- # : 1 00:23:58.054 13:43:09 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:58.054 13:43:09 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:58.054 13:43:09 ublk -- scripts/common.sh@365 -- # decimal 1 00:23:58.054 13:43:09 ublk -- scripts/common.sh@353 -- # local d=1 00:23:58.054 13:43:09 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:58.054 13:43:09 ublk -- scripts/common.sh@355 -- # echo 1 00:23:58.054 13:43:09 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:23:58.054 13:43:09 ublk -- scripts/common.sh@366 -- # decimal 2 00:23:58.054 13:43:09 ublk -- scripts/common.sh@353 -- # local d=2 00:23:58.054 13:43:09 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:58.054 13:43:09 ublk -- scripts/common.sh@355 -- # echo 2 00:23:58.054 13:43:09 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:23:58.054 13:43:09 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:58.054 13:43:09 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:58.054 13:43:09 ublk -- scripts/common.sh@368 -- # return 0 00:23:58.054 13:43:09 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:58.054 13:43:09 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:58.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.054 --rc genhtml_branch_coverage=1 00:23:58.054 --rc genhtml_function_coverage=1 00:23:58.054 --rc genhtml_legend=1 00:23:58.055 --rc geninfo_all_blocks=1 00:23:58.055 --rc geninfo_unexecuted_blocks=1 00:23:58.055 00:23:58.055 ' 00:23:58.055 13:43:09 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:58.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.055 --rc genhtml_branch_coverage=1 00:23:58.055 --rc genhtml_function_coverage=1 00:23:58.055 --rc genhtml_legend=1 00:23:58.055 --rc geninfo_all_blocks=1 00:23:58.055 --rc geninfo_unexecuted_blocks=1 00:23:58.055 00:23:58.055 ' 00:23:58.055 13:43:09 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:58.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.055 --rc genhtml_branch_coverage=1 00:23:58.055 --rc 
genhtml_function_coverage=1 00:23:58.055 --rc genhtml_legend=1 00:23:58.055 --rc geninfo_all_blocks=1 00:23:58.055 --rc geninfo_unexecuted_blocks=1 00:23:58.055 00:23:58.055 ' 00:23:58.055 13:43:09 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:58.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.055 --rc genhtml_branch_coverage=1 00:23:58.055 --rc genhtml_function_coverage=1 00:23:58.055 --rc genhtml_legend=1 00:23:58.055 --rc geninfo_all_blocks=1 00:23:58.055 --rc geninfo_unexecuted_blocks=1 00:23:58.055 00:23:58.055 ' 00:23:58.055 13:43:09 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:23:58.055 13:43:09 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:23:58.055 13:43:09 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:23:58.055 13:43:09 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:23:58.055 13:43:09 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:23:58.055 13:43:09 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:23:58.055 13:43:09 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:23:58.055 13:43:09 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:23:58.055 13:43:09 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:23:58.055 13:43:09 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:23:58.055 13:43:09 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:23:58.055 13:43:09 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:23:58.055 13:43:09 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:23:58.055 13:43:09 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:23:58.055 13:43:09 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:23:58.055 13:43:09 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:23:58.055 13:43:09 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:23:58.055 13:43:09 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:23:58.055 13:43:09 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:23:58.373 13:43:10 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:23:58.373 13:43:10 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:58.373 13:43:10 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.373 13:43:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:58.373 ************************************ 00:23:58.373 START TEST test_save_ublk_config 00:23:58.373 ************************************ 00:23:58.373 13:43:10 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:23:58.373 13:43:10 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:23:58.373 13:43:10 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75430 00:23:58.373 13:43:10 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:23:58.373 13:43:10 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:23:58.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
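The waitforlisten 75430 call below blocks until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket. A rough sketch of the idea — the real helper lives in test/common/autotest_common.sh and also retries with a bounded count rather than forever:

    ./build/bin/spdk_tgt -L ublk &
    tgtpid=$!
    until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
        kill -0 "$tgtpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done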
00:23:58.373 13:43:10 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75430 00:23:58.373 13:43:10 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75430 ']' 00:23:58.373 13:43:10 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.373 13:43:10 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.373 13:43:10 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.373 13:43:10 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.373 13:43:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:58.373 [2024-11-20 13:43:10.159032] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:23:58.373 [2024-11-20 13:43:10.159172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75430 ] 00:23:58.631 [2024-11-20 13:43:10.330660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.631 [2024-11-20 13:43:10.501156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.009 13:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.009 13:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:24:00.009 13:43:11 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:24:00.009 13:43:11 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:24:00.009 13:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.009 13:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:00.009 [2024-11-20 13:43:11.559628] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:00.009 [2024-11-20 13:43:11.560963] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:00.009 malloc0 00:24:00.009 [2024-11-20 13:43:11.655820] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:24:00.009 [2024-11-20 13:43:11.655965] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:24:00.009 [2024-11-20 13:43:11.655983] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:00.009 [2024-11-20 13:43:11.655995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:00.009 [2024-11-20 13:43:11.664751] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:00.009 [2024-11-20 13:43:11.664783] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:00.009 [2024-11-20 13:43:11.671646] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:00.009 [2024-11-20 13:43:11.671774] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:00.009 [2024-11-20 13:43:11.688629] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:00.009 0 00:24:00.009 13:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.009 13:43:11 ublk.test_save_ublk_config -- 
ublk/ublk.sh@115 -- # rpc_cmd save_config 00:24:00.009 13:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.009 13:43:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:00.267 13:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.267 13:43:12 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:24:00.267 "subsystems": [ 00:24:00.267 { 00:24:00.267 "subsystem": "fsdev", 00:24:00.267 "config": [ 00:24:00.267 { 00:24:00.267 "method": "fsdev_set_opts", 00:24:00.267 "params": { 00:24:00.267 "fsdev_io_pool_size": 65535, 00:24:00.267 "fsdev_io_cache_size": 256 00:24:00.267 } 00:24:00.267 } 00:24:00.267 ] 00:24:00.267 }, 00:24:00.267 { 00:24:00.267 "subsystem": "keyring", 00:24:00.267 "config": [] 00:24:00.267 }, 00:24:00.267 { 00:24:00.267 "subsystem": "iobuf", 00:24:00.267 "config": [ 00:24:00.267 { 00:24:00.267 "method": "iobuf_set_options", 00:24:00.267 "params": { 00:24:00.267 "small_pool_count": 8192, 00:24:00.267 "large_pool_count": 1024, 00:24:00.267 "small_bufsize": 8192, 00:24:00.267 "large_bufsize": 135168, 00:24:00.267 "enable_numa": false 00:24:00.267 } 00:24:00.267 } 00:24:00.267 ] 00:24:00.267 }, 00:24:00.267 { 00:24:00.267 "subsystem": "sock", 00:24:00.267 "config": [ 00:24:00.267 { 00:24:00.268 "method": "sock_set_default_impl", 00:24:00.268 "params": { 00:24:00.268 "impl_name": "posix" 00:24:00.268 } 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "method": "sock_impl_set_options", 00:24:00.268 "params": { 00:24:00.268 "impl_name": "ssl", 00:24:00.268 "recv_buf_size": 4096, 00:24:00.268 "send_buf_size": 4096, 00:24:00.268 "enable_recv_pipe": true, 00:24:00.268 "enable_quickack": false, 00:24:00.268 "enable_placement_id": 0, 00:24:00.268 "enable_zerocopy_send_server": true, 00:24:00.268 "enable_zerocopy_send_client": false, 00:24:00.268 "zerocopy_threshold": 0, 00:24:00.268 "tls_version": 0, 00:24:00.268 "enable_ktls": false 00:24:00.268 } 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "method": "sock_impl_set_options", 00:24:00.268 "params": { 00:24:00.268 "impl_name": "posix", 00:24:00.268 "recv_buf_size": 2097152, 00:24:00.268 "send_buf_size": 2097152, 00:24:00.268 "enable_recv_pipe": true, 00:24:00.268 "enable_quickack": false, 00:24:00.268 "enable_placement_id": 0, 00:24:00.268 "enable_zerocopy_send_server": true, 00:24:00.268 "enable_zerocopy_send_client": false, 00:24:00.268 "zerocopy_threshold": 0, 00:24:00.268 "tls_version": 0, 00:24:00.268 "enable_ktls": false 00:24:00.268 } 00:24:00.268 } 00:24:00.268 ] 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "subsystem": "vmd", 00:24:00.268 "config": [] 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "subsystem": "accel", 00:24:00.268 "config": [ 00:24:00.268 { 00:24:00.268 "method": "accel_set_options", 00:24:00.268 "params": { 00:24:00.268 "small_cache_size": 128, 00:24:00.268 "large_cache_size": 16, 00:24:00.268 "task_count": 2048, 00:24:00.268 "sequence_count": 2048, 00:24:00.268 "buf_count": 2048 00:24:00.268 } 00:24:00.268 } 00:24:00.268 ] 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "subsystem": "bdev", 00:24:00.268 "config": [ 00:24:00.268 { 00:24:00.268 "method": "bdev_set_options", 00:24:00.268 "params": { 00:24:00.268 "bdev_io_pool_size": 65535, 00:24:00.268 "bdev_io_cache_size": 256, 00:24:00.268 "bdev_auto_examine": true, 00:24:00.268 "iobuf_small_cache_size": 128, 00:24:00.268 "iobuf_large_cache_size": 16 00:24:00.268 } 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "method": "bdev_raid_set_options", 
00:24:00.268 "params": { 00:24:00.268 "process_window_size_kb": 1024, 00:24:00.268 "process_max_bandwidth_mb_sec": 0 00:24:00.268 } 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "method": "bdev_iscsi_set_options", 00:24:00.268 "params": { 00:24:00.268 "timeout_sec": 30 00:24:00.268 } 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "method": "bdev_nvme_set_options", 00:24:00.268 "params": { 00:24:00.268 "action_on_timeout": "none", 00:24:00.268 "timeout_us": 0, 00:24:00.268 "timeout_admin_us": 0, 00:24:00.268 "keep_alive_timeout_ms": 10000, 00:24:00.268 "arbitration_burst": 0, 00:24:00.268 "low_priority_weight": 0, 00:24:00.268 "medium_priority_weight": 0, 00:24:00.268 "high_priority_weight": 0, 00:24:00.268 "nvme_adminq_poll_period_us": 10000, 00:24:00.268 "nvme_ioq_poll_period_us": 0, 00:24:00.268 "io_queue_requests": 0, 00:24:00.268 "delay_cmd_submit": true, 00:24:00.268 "transport_retry_count": 4, 00:24:00.268 "bdev_retry_count": 3, 00:24:00.268 "transport_ack_timeout": 0, 00:24:00.268 "ctrlr_loss_timeout_sec": 0, 00:24:00.268 "reconnect_delay_sec": 0, 00:24:00.268 "fast_io_fail_timeout_sec": 0, 00:24:00.268 "disable_auto_failback": false, 00:24:00.268 "generate_uuids": false, 00:24:00.268 "transport_tos": 0, 00:24:00.268 "nvme_error_stat": false, 00:24:00.268 "rdma_srq_size": 0, 00:24:00.268 "io_path_stat": false, 00:24:00.268 "allow_accel_sequence": false, 00:24:00.268 "rdma_max_cq_size": 0, 00:24:00.268 "rdma_cm_event_timeout_ms": 0, 00:24:00.268 "dhchap_digests": [ 00:24:00.268 "sha256", 00:24:00.268 "sha384", 00:24:00.268 "sha512" 00:24:00.268 ], 00:24:00.268 "dhchap_dhgroups": [ 00:24:00.268 "null", 00:24:00.268 "ffdhe2048", 00:24:00.268 "ffdhe3072", 00:24:00.268 "ffdhe4096", 00:24:00.268 "ffdhe6144", 00:24:00.268 "ffdhe8192" 00:24:00.268 ] 00:24:00.268 } 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "method": "bdev_nvme_set_hotplug", 00:24:00.268 "params": { 00:24:00.268 "period_us": 100000, 00:24:00.268 "enable": false 00:24:00.268 } 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "method": "bdev_malloc_create", 00:24:00.268 "params": { 00:24:00.268 "name": "malloc0", 00:24:00.268 "num_blocks": 8192, 00:24:00.268 "block_size": 4096, 00:24:00.268 "physical_block_size": 4096, 00:24:00.268 "uuid": "0639d1bb-7893-432b-a309-9e4be00cf433", 00:24:00.268 "optimal_io_boundary": 0, 00:24:00.268 "md_size": 0, 00:24:00.268 "dif_type": 0, 00:24:00.268 "dif_is_head_of_md": false, 00:24:00.268 "dif_pi_format": 0 00:24:00.268 } 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "method": "bdev_wait_for_examine" 00:24:00.268 } 00:24:00.268 ] 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "subsystem": "scsi", 00:24:00.268 "config": null 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "subsystem": "scheduler", 00:24:00.268 "config": [ 00:24:00.268 { 00:24:00.268 "method": "framework_set_scheduler", 00:24:00.268 "params": { 00:24:00.268 "name": "static" 00:24:00.268 } 00:24:00.268 } 00:24:00.268 ] 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "subsystem": "vhost_scsi", 00:24:00.268 "config": [] 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "subsystem": "vhost_blk", 00:24:00.268 "config": [] 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "subsystem": "ublk", 00:24:00.268 "config": [ 00:24:00.268 { 00:24:00.268 "method": "ublk_create_target", 00:24:00.268 "params": { 00:24:00.268 "cpumask": "1" 00:24:00.268 } 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "method": "ublk_start_disk", 00:24:00.268 "params": { 00:24:00.268 "bdev_name": "malloc0", 00:24:00.268 "ublk_id": 0, 00:24:00.268 "num_queues": 1, 00:24:00.268 "queue_depth": 128 
00:24:00.268 } 00:24:00.268 } 00:24:00.268 ] 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "subsystem": "nbd", 00:24:00.268 "config": [] 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "subsystem": "nvmf", 00:24:00.268 "config": [ 00:24:00.268 { 00:24:00.268 "method": "nvmf_set_config", 00:24:00.268 "params": { 00:24:00.268 "discovery_filter": "match_any", 00:24:00.268 "admin_cmd_passthru": { 00:24:00.268 "identify_ctrlr": false 00:24:00.268 }, 00:24:00.268 "dhchap_digests": [ 00:24:00.268 "sha256", 00:24:00.268 "sha384", 00:24:00.268 "sha512" 00:24:00.268 ], 00:24:00.268 "dhchap_dhgroups": [ 00:24:00.268 "null", 00:24:00.268 "ffdhe2048", 00:24:00.268 "ffdhe3072", 00:24:00.268 "ffdhe4096", 00:24:00.268 "ffdhe6144", 00:24:00.268 "ffdhe8192" 00:24:00.268 ] 00:24:00.268 } 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "method": "nvmf_set_max_subsystems", 00:24:00.268 "params": { 00:24:00.268 "max_subsystems": 1024 00:24:00.268 } 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "method": "nvmf_set_crdt", 00:24:00.268 "params": { 00:24:00.268 "crdt1": 0, 00:24:00.268 "crdt2": 0, 00:24:00.268 "crdt3": 0 00:24:00.268 } 00:24:00.268 } 00:24:00.268 ] 00:24:00.268 }, 00:24:00.268 { 00:24:00.268 "subsystem": "iscsi", 00:24:00.268 "config": [ 00:24:00.268 { 00:24:00.268 "method": "iscsi_set_options", 00:24:00.268 "params": { 00:24:00.268 "node_base": "iqn.2016-06.io.spdk", 00:24:00.268 "max_sessions": 128, 00:24:00.268 "max_connections_per_session": 2, 00:24:00.268 "max_queue_depth": 64, 00:24:00.268 "default_time2wait": 2, 00:24:00.268 "default_time2retain": 20, 00:24:00.268 "first_burst_length": 8192, 00:24:00.268 "immediate_data": true, 00:24:00.268 "allow_duplicated_isid": false, 00:24:00.268 "error_recovery_level": 0, 00:24:00.268 "nop_timeout": 60, 00:24:00.268 "nop_in_interval": 30, 00:24:00.268 "disable_chap": false, 00:24:00.268 "require_chap": false, 00:24:00.268 "mutual_chap": false, 00:24:00.268 "chap_group": 0, 00:24:00.268 "max_large_datain_per_connection": 64, 00:24:00.268 "max_r2t_per_connection": 4, 00:24:00.268 "pdu_pool_size": 36864, 00:24:00.268 "immediate_data_pool_size": 16384, 00:24:00.268 "data_out_pool_size": 2048 00:24:00.268 } 00:24:00.268 } 00:24:00.268 ] 00:24:00.268 } 00:24:00.268 ] 00:24:00.268 }' 00:24:00.268 13:43:12 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75430 00:24:00.268 13:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75430 ']' 00:24:00.268 13:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75430 00:24:00.268 13:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:24:00.268 13:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.268 13:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75430 00:24:00.269 13:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:00.269 killing process with pid 75430 00:24:00.269 13:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:00.269 13:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75430' 00:24:00.269 13:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75430 00:24:00.269 13:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75430 00:24:02.171 [2024-11-20 13:43:13.635443] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_STOP_DEV 00:24:02.171 [2024-11-20 13:43:13.672681] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:02.171 [2024-11-20 13:43:13.672887] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:02.171 [2024-11-20 13:43:13.681674] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:02.171 [2024-11-20 13:43:13.681768] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:02.171 [2024-11-20 13:43:13.681791] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:02.171 [2024-11-20 13:43:13.681828] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:02.171 [2024-11-20 13:43:13.682012] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:04.134 13:43:15 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:24:04.134 13:43:15 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75506 00:24:04.134 13:43:15 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75506 00:24:04.134 13:43:15 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75506 ']' 00:24:04.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.134 13:43:15 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.134 13:43:15 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.134 13:43:15 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.134 13:43:15 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.134 13:43:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:04.134 13:43:15 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:24:04.134 "subsystems": [ 00:24:04.134 { 00:24:04.134 "subsystem": "fsdev", 00:24:04.134 "config": [ 00:24:04.134 { 00:24:04.134 "method": "fsdev_set_opts", 00:24:04.134 "params": { 00:24:04.134 "fsdev_io_pool_size": 65535, 00:24:04.134 "fsdev_io_cache_size": 256 00:24:04.134 } 00:24:04.134 } 00:24:04.134 ] 00:24:04.134 }, 00:24:04.134 { 00:24:04.134 "subsystem": "keyring", 00:24:04.134 "config": [] 00:24:04.134 }, 00:24:04.134 { 00:24:04.134 "subsystem": "iobuf", 00:24:04.134 "config": [ 00:24:04.134 { 00:24:04.134 "method": "iobuf_set_options", 00:24:04.134 "params": { 00:24:04.134 "small_pool_count": 8192, 00:24:04.134 "large_pool_count": 1024, 00:24:04.134 "small_bufsize": 8192, 00:24:04.134 "large_bufsize": 135168, 00:24:04.134 "enable_numa": false 00:24:04.134 } 00:24:04.134 } 00:24:04.134 ] 00:24:04.134 }, 00:24:04.134 { 00:24:04.134 "subsystem": "sock", 00:24:04.134 "config": [ 00:24:04.134 { 00:24:04.134 "method": "sock_set_default_impl", 00:24:04.134 "params": { 00:24:04.134 "impl_name": "posix" 00:24:04.134 } 00:24:04.134 }, 00:24:04.134 { 00:24:04.134 "method": "sock_impl_set_options", 00:24:04.134 "params": { 00:24:04.134 "impl_name": "ssl", 00:24:04.134 "recv_buf_size": 4096, 00:24:04.134 "send_buf_size": 4096, 00:24:04.134 "enable_recv_pipe": true, 00:24:04.134 "enable_quickack": false, 00:24:04.134 "enable_placement_id": 0, 00:24:04.134 "enable_zerocopy_send_server": true, 00:24:04.134 "enable_zerocopy_send_client": false, 00:24:04.134 "zerocopy_threshold": 0, 00:24:04.134 "tls_version": 0, 00:24:04.134 
"enable_ktls": false 00:24:04.134 } 00:24:04.134 }, 00:24:04.134 { 00:24:04.134 "method": "sock_impl_set_options", 00:24:04.134 "params": { 00:24:04.134 "impl_name": "posix", 00:24:04.134 "recv_buf_size": 2097152, 00:24:04.134 "send_buf_size": 2097152, 00:24:04.134 "enable_recv_pipe": true, 00:24:04.134 "enable_quickack": false, 00:24:04.134 "enable_placement_id": 0, 00:24:04.134 "enable_zerocopy_send_server": true, 00:24:04.134 "enable_zerocopy_send_client": false, 00:24:04.134 "zerocopy_threshold": 0, 00:24:04.134 "tls_version": 0, 00:24:04.134 "enable_ktls": false 00:24:04.134 } 00:24:04.134 } 00:24:04.134 ] 00:24:04.134 }, 00:24:04.134 { 00:24:04.134 "subsystem": "vmd", 00:24:04.134 "config": [] 00:24:04.134 }, 00:24:04.134 { 00:24:04.134 "subsystem": "accel", 00:24:04.134 "config": [ 00:24:04.134 { 00:24:04.134 "method": "accel_set_options", 00:24:04.134 "params": { 00:24:04.134 "small_cache_size": 128, 00:24:04.134 "large_cache_size": 16, 00:24:04.134 "task_count": 2048, 00:24:04.134 "sequence_count": 2048, 00:24:04.134 "buf_count": 2048 00:24:04.134 } 00:24:04.134 } 00:24:04.134 ] 00:24:04.134 }, 00:24:04.134 { 00:24:04.134 "subsystem": "bdev", 00:24:04.134 "config": [ 00:24:04.134 { 00:24:04.134 "method": "bdev_set_options", 00:24:04.134 "params": { 00:24:04.134 "bdev_io_pool_size": 65535, 00:24:04.134 "bdev_io_cache_size": 256, 00:24:04.134 "bdev_auto_examine": true, 00:24:04.134 "iobuf_small_cache_size": 128, 00:24:04.134 "iobuf_large_cache_size": 16 00:24:04.134 } 00:24:04.134 }, 00:24:04.134 { 00:24:04.134 "method": "bdev_raid_set_options", 00:24:04.134 "params": { 00:24:04.134 "process_window_size_kb": 1024, 00:24:04.134 "process_max_bandwidth_mb_sec": 0 00:24:04.134 } 00:24:04.134 }, 00:24:04.134 { 00:24:04.134 "method": "bdev_iscsi_set_options", 00:24:04.134 "params": { 00:24:04.134 "timeout_sec": 30 00:24:04.134 } 00:24:04.134 }, 00:24:04.134 { 00:24:04.134 "method": "bdev_nvme_set_options", 00:24:04.134 "params": { 00:24:04.134 "action_on_timeout": "none", 00:24:04.134 "timeout_us": 0, 00:24:04.134 "timeout_admin_us": 0, 00:24:04.134 "keep_alive_timeout_ms": 10000, 00:24:04.134 "arbitration_burst": 0, 00:24:04.134 "low_priority_weight": 0, 00:24:04.134 "medium_priority_weight": 0, 00:24:04.134 "high_priority_weight": 0, 00:24:04.134 "nvme_adminq_poll_period_us": 10000, 00:24:04.135 "nvme_ioq_poll_period_us": 0, 00:24:04.135 "io_queue_requests": 0, 00:24:04.135 "delay_cmd_submit": true, 00:24:04.135 "transport_retry_count": 4, 00:24:04.135 "bdev_retry_count": 3, 00:24:04.135 "transport_ack_timeout": 0, 00:24:04.135 "ctrlr_loss_timeout_sec": 0, 00:24:04.135 "reconnect_delay_sec": 0, 00:24:04.135 "fast_io_fail_timeout_sec": 0, 00:24:04.135 "disable_auto_failback": false, 00:24:04.135 "generate_uuids": false, 00:24:04.135 "transport_tos": 0, 00:24:04.135 "nvme_error_stat": false, 00:24:04.135 "rdma_srq_size": 0, 00:24:04.135 "io_path_stat": false, 00:24:04.135 "allow_accel_sequence": false, 00:24:04.135 "rdma_max_cq_size": 0, 00:24:04.135 "rdma_cm_event_timeout_ms": 0, 00:24:04.135 "dhchap_digests": [ 00:24:04.135 "sha256", 00:24:04.135 "sha384", 00:24:04.135 "sha512" 00:24:04.135 ], 00:24:04.135 "dhchap_dhgroups": [ 00:24:04.135 "null", 00:24:04.135 "ffdhe2048", 00:24:04.135 "ffdhe3072", 00:24:04.135 "ffdhe4096", 00:24:04.135 "ffdhe6144", 00:24:04.135 "ffdhe8192" 00:24:04.135 ] 00:24:04.135 } 00:24:04.135 }, 00:24:04.135 { 00:24:04.135 "method": "bdev_nvme_set_hotplug", 00:24:04.135 "params": { 00:24:04.135 "period_us": 100000, 00:24:04.135 "enable": false 00:24:04.135 } 
00:24:04.135 }, 00:24:04.135 { 00:24:04.135 "method": "bdev_malloc_create", 00:24:04.135 "params": { 00:24:04.135 "name": "malloc0", 00:24:04.135 "num_blocks": 8192, 00:24:04.135 "block_size": 4096, 00:24:04.135 "physical_block_size": 4096, 00:24:04.135 "uuid": "0639d1bb-7893-432b-a309-9e4be00cf433", 00:24:04.135 "optimal_io_boundary": 0, 00:24:04.135 "md_size": 0, 00:24:04.135 "dif_type": 0, 00:24:04.135 "dif_is_head_of_md": false, 00:24:04.135 "dif_pi_format": 0 00:24:04.135 } 00:24:04.135 }, 00:24:04.135 { 00:24:04.135 "method": "bdev_wait_for_examine" 00:24:04.135 } 00:24:04.135 ] 00:24:04.135 }, 00:24:04.135 { 00:24:04.135 "subsystem": "scsi", 00:24:04.135 "config": null 00:24:04.135 }, 00:24:04.135 { 00:24:04.135 "subsystem": "scheduler", 00:24:04.135 "config": [ 00:24:04.135 { 00:24:04.135 "method": "framework_set_scheduler", 00:24:04.135 "params": { 00:24:04.135 "name": "static" 00:24:04.135 } 00:24:04.135 } 00:24:04.135 ] 00:24:04.135 }, 00:24:04.135 { 00:24:04.135 "subsystem": "vhost_scsi", 00:24:04.135 "config": [] 00:24:04.135 }, 00:24:04.135 { 00:24:04.135 "subsystem": "vhost_blk", 00:24:04.135 "config": [] 00:24:04.135 }, 00:24:04.135 { 00:24:04.135 "subsystem": "ublk", 00:24:04.135 "config": [ 00:24:04.135 { 00:24:04.135 "method": "ublk_create_target", 00:24:04.135 "params": { 00:24:04.135 "cpumask": "1" 00:24:04.135 } 00:24:04.135 }, 00:24:04.135 { 00:24:04.135 "method": "ublk_start_disk", 00:24:04.135 "params": { 00:24:04.135 "bdev_name": "malloc0", 00:24:04.135 "ublk_id": 0, 00:24:04.135 "num_queues": 1, 00:24:04.135 "queue_depth": 128 00:24:04.135 } 00:24:04.135 } 00:24:04.135 ] 00:24:04.135 }, 00:24:04.135 { 00:24:04.135 "subsystem": "nbd", 00:24:04.135 "config": [] 00:24:04.135 }, 00:24:04.135 { 00:24:04.135 "subsystem": "nvmf", 00:24:04.135 "config": [ 00:24:04.135 { 00:24:04.135 "method": "nvmf_set_config", 00:24:04.135 "params": { 00:24:04.135 "discovery_filter": "match_any", 00:24:04.135 "admin_cmd_passthru": { 00:24:04.135 "identify_ctrlr": false 00:24:04.135 }, 00:24:04.135 "dhchap_digests": [ 00:24:04.135 "sha256", 00:24:04.135 "sha384", 00:24:04.135 "sha512" 00:24:04.135 ], 00:24:04.135 "dhchap_dhgroups": [ 00:24:04.135 "null", 00:24:04.135 "ffdhe2048", 00:24:04.135 "ffdhe3072", 00:24:04.135 "ffdhe4096", 00:24:04.135 "ffdhe6144", 00:24:04.135 "ffdhe8192" 00:24:04.135 ] 00:24:04.135 } 00:24:04.135 }, 00:24:04.135 { 00:24:04.135 "method": "nvmf_set_max_subsystems", 00:24:04.135 "params": { 00:24:04.135 "max_subsystems": 1024 00:24:04.135 } 00:24:04.135 }, 00:24:04.135 { 00:24:04.135 "method": "nvmf_set_crdt", 00:24:04.135 "params": { 00:24:04.135 "crdt1": 0, 00:24:04.135 "crdt2": 0, 00:24:04.135 "crdt3": 0 00:24:04.135 } 00:24:04.135 } 00:24:04.135 ] 00:24:04.135 }, 00:24:04.135 { 00:24:04.135 "subsystem": "iscsi", 00:24:04.135 "config": [ 00:24:04.135 { 00:24:04.135 "method": "iscsi_set_options", 00:24:04.135 "params": { 00:24:04.135 "node_base": "iqn.2016-06.io.spdk", 00:24:04.135 "max_sessions": 128, 00:24:04.135 "max_connections_per_session": 2, 00:24:04.135 "max_queue_depth": 64, 00:24:04.135 "default_time2wait": 2, 00:24:04.135 "default_time2retain": 20, 00:24:04.135 "first_burst_length": 8192, 00:24:04.135 "immediate_data": true, 00:24:04.135 "allow_duplicated_isid": false, 00:24:04.135 "error_recovery_level": 0, 00:24:04.135 "nop_timeout": 60, 00:24:04.135 "nop_in_interval": 30, 00:24:04.135 "disable_chap": false, 00:24:04.135 "require_chap": false, 00:24:04.135 "mutual_chap": false, 00:24:04.135 "chap_group": 0, 00:24:04.135 
"max_large_datain_per_connection": 64, 00:24:04.135 "max_r2t_per_connection": 4, 00:24:04.135 "pdu_pool_size": 36864, 00:24:04.135 "immediate_data_pool_size": 16384, 00:24:04.135 "data_out_pool_size": 2048 00:24:04.135 } 00:24:04.135 } 00:24:04.135 ] 00:24:04.135 } 00:24:04.135 ] 00:24:04.135 }' 00:24:04.135 [2024-11-20 13:43:15.820616] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:24:04.135 [2024-11-20 13:43:15.820767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75506 ] 00:24:04.135 [2024-11-20 13:43:16.004437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.394 [2024-11-20 13:43:16.153974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.769 [2024-11-20 13:43:17.368620] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:05.769 [2024-11-20 13:43:17.369935] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:05.769 [2024-11-20 13:43:17.375788] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:24:05.769 [2024-11-20 13:43:17.375911] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:24:05.769 [2024-11-20 13:43:17.375926] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:05.769 [2024-11-20 13:43:17.375935] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:05.769 [2024-11-20 13:43:17.383668] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:05.769 [2024-11-20 13:43:17.383699] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:05.769 [2024-11-20 13:43:17.391658] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:05.769 [2024-11-20 13:43:17.391784] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:05.769 [2024-11-20 13:43:17.415638] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75506 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75506 ']' 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75506 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- 
common/autotest_common.sh@959 -- # uname 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75506 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:05.769 killing process with pid 75506 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75506' 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75506 00:24:05.769 13:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75506 00:24:07.673 [2024-11-20 13:43:19.230610] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:24:07.673 [2024-11-20 13:43:19.270651] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:07.673 [2024-11-20 13:43:19.270817] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:07.674 [2024-11-20 13:43:19.278645] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:07.674 [2024-11-20 13:43:19.278728] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:07.674 [2024-11-20 13:43:19.278740] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:07.674 [2024-11-20 13:43:19.278772] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:07.674 [2024-11-20 13:43:19.278960] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:10.210 13:43:21 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:24:10.210 ************************************ 00:24:10.210 END TEST test_save_ublk_config 00:24:10.210 ************************************ 00:24:10.210 00:24:10.210 real 0m11.748s 00:24:10.210 user 0m8.678s 00:24:10.210 sys 0m3.916s 00:24:10.210 13:43:21 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.210 13:43:21 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:10.210 13:43:21 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75603 00:24:10.210 13:43:21 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:10.210 13:43:21 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:10.210 13:43:21 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75603 00:24:10.210 13:43:21 ublk -- common/autotest_common.sh@835 -- # '[' -z 75603 ']' 00:24:10.210 13:43:21 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.210 13:43:21 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.210 13:43:21 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.210 13:43:21 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.210 13:43:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:10.210 [2024-11-20 13:43:21.954591] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:24:10.210 [2024-11-20 13:43:21.954961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75603 ] 00:24:10.210 [2024-11-20 13:43:22.141131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:10.468 [2024-11-20 13:43:22.285687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.468 [2024-11-20 13:43:22.285734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.407 13:43:23 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.407 13:43:23 ublk -- common/autotest_common.sh@868 -- # return 0 00:24:11.407 13:43:23 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:24:11.407 13:43:23 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:11.407 13:43:23 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.407 13:43:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:11.407 ************************************ 00:24:11.407 START TEST test_create_ublk 00:24:11.407 ************************************ 00:24:11.408 13:43:23 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:24:11.408 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:24:11.408 13:43:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.408 13:43:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:11.408 [2024-11-20 13:43:23.327634] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:11.408 [2024-11-20 13:43:23.330792] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:11.408 13:43:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.408 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:24:11.408 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:24:11.408 13:43:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.408 13:43:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:11.982 13:43:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.982 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:24:11.982 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:24:11.982 13:43:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.982 13:43:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:11.982 [2024-11-20 13:43:23.647872] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:24:11.982 [2024-11-20 13:43:23.648407] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:24:11.982 [2024-11-20 13:43:23.648431] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:11.982 [2024-11-20 13:43:23.648442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:11.982 [2024-11-20 13:43:23.655683] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:11.982 [2024-11-20 13:43:23.655717] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:11.982 
[2024-11-20 13:43:23.662672] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:11.982 [2024-11-20 13:43:23.663382] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:11.982 [2024-11-20 13:43:23.693716] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:11.982 13:43:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.982 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:24:11.982 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:24:11.982 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:24:11.982 13:43:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.982 13:43:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:11.982 13:43:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.982 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:24:11.982 { 00:24:11.982 "ublk_device": "/dev/ublkb0", 00:24:11.982 "id": 0, 00:24:11.982 "queue_depth": 512, 00:24:11.982 "num_queues": 4, 00:24:11.982 "bdev_name": "Malloc0" 00:24:11.982 } 00:24:11.982 ]' 00:24:11.982 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:24:11.982 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:11.982 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:24:11.983 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:24:11.983 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:24:11.983 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:24:11.983 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:24:11.983 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:24:11.983 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:24:12.242 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:24:12.242 13:43:23 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:24:12.242 13:43:23 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:24:12.242 13:43:23 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:24:12.242 13:43:23 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:24:12.242 13:43:23 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:24:12.242 13:43:23 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:24:12.242 13:43:23 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:24:12.242 13:43:23 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:24:12.242 13:43:23 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:24:12.242 13:43:23 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:24:12.242 13:43:23 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:24:12.242 13:43:23 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:24:12.242 fio: verification read phase will never start because write phase uses all of runtime 00:24:12.242 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:24:12.242 fio-3.35 00:24:12.242 Starting 1 process 00:24:24.453 00:24:24.453 fio_test: (groupid=0, jobs=1): err= 0: pid=75661: Wed Nov 20 13:43:34 2024 00:24:24.453 write: IOPS=6344, BW=24.8MiB/s (26.0MB/s)(248MiB/10001msec); 0 zone resets 00:24:24.453 clat (usec): min=64, max=7451, avg=156.56, stdev=171.88 00:24:24.453 lat (usec): min=64, max=7481, avg=157.17, stdev=171.95 00:24:24.453 clat percentiles (usec): 00:24:24.453 | 1.00th=[ 71], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 82], 00:24:24.453 | 30.00th=[ 139], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 172], 00:24:24.453 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 202], 00:24:24.453 | 99.00th=[ 237], 99.50th=[ 265], 99.90th=[ 3589], 99.95th=[ 3785], 00:24:24.453 | 99.99th=[ 4047] 00:24:24.453 bw ( KiB/s): min= 9840, max=46840, per=100.00%, avg=25608.42, stdev=9542.16, samples=19 00:24:24.453 iops : min= 2460, max=11710, avg=6402.11, stdev=2385.54, samples=19 00:24:24.453 lat (usec) : 100=27.21%, 250=72.14%, 500=0.27%, 750=0.03%, 1000=0.03% 00:24:24.453 lat (msec) : 2=0.08%, 4=0.23%, 10=0.01% 00:24:24.453 cpu : usr=1.42%, sys=4.95%, ctx=63453, majf=0, minf=797 00:24:24.453 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:24.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.453 issued rwts: total=0,63451,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.453 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:24.453 00:24:24.453 Run status group 0 (all jobs): 00:24:24.453 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=248MiB (260MB), run=10001-10001msec 00:24:24.453 00:24:24.453 Disk stats (read/write): 00:24:24.453 ublkb0: ios=0/62894, merge=0/0, ticks=0/9249, in_queue=9250, util=99.12% 00:24:24.453 13:43:34 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.453 [2024-11-20 13:43:34.211080] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:24:24.453 [2024-11-20 13:43:34.258773] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:24.453 [2024-11-20 13:43:34.260340] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:24.453 [2024-11-20 13:43:34.266737] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:24.453 [2024-11-20 13:43:34.267186] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:24.453 [2024-11-20 13:43:34.267204] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.453 13:43:34 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:24:24.453 13:43:34 
ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.453 [2024-11-20 13:43:34.290784] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:24:24.453 request: 00:24:24.453 { 00:24:24.453 "ublk_id": 0, 00:24:24.453 "method": "ublk_stop_disk", 00:24:24.453 "req_id": 1 00:24:24.453 } 00:24:24.453 Got JSON-RPC error response 00:24:24.453 response: 00:24:24.453 { 00:24:24.453 "code": -19, 00:24:24.453 "message": "No such device" 00:24:24.453 } 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:24.453 13:43:34 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.453 [2024-11-20 13:43:34.314774] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:24.453 [2024-11-20 13:43:34.323609] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:24.453 [2024-11-20 13:43:34.323677] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:24:24.453 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.453 13:43:34 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:24.454 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.454 13:43:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.454 13:43:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.454 13:43:35 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:24:24.454 13:43:35 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:24:24.454 13:43:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.454 13:43:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.454 13:43:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.454 13:43:35 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:24:24.454 13:43:35 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:24:24.454 13:43:35 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:24:24.454 13:43:35 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:24.454 13:43:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.454 13:43:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.454 13:43:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.454 13:43:35 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:24:24.454 13:43:35 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:24:24.454 ************************************ 00:24:24.454 END TEST test_create_ublk 00:24:24.454 ************************************ 00:24:24.454 13:43:35 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:24:24.454 00:24:24.454 real 0m11.881s 00:24:24.454 user 0m0.546s 00:24:24.454 sys 0m0.634s 00:24:24.454 13:43:35 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:24.454 13:43:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.454 13:43:35 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:24:24.454 13:43:35 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:24.454 13:43:35 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:24.454 13:43:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.454 ************************************ 00:24:24.454 START TEST test_create_multi_ublk 00:24:24.454 ************************************ 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.454 [2024-11-20 13:43:35.275640] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:24.454 [2024-11-20 13:43:35.278560] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.454 [2024-11-20 13:43:35.568868] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:24:24.454 [2024-11-20 
13:43:35.569405] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:24:24.454 [2024-11-20 13:43:35.569422] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:24.454 [2024-11-20 13:43:35.569439] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:24.454 [2024-11-20 13:43:35.576695] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:24.454 [2024-11-20 13:43:35.576735] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:24.454 [2024-11-20 13:43:35.584728] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:24.454 [2024-11-20 13:43:35.585525] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:24.454 [2024-11-20 13:43:35.604664] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.454 [2024-11-20 13:43:35.923941] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:24:24.454 [2024-11-20 13:43:35.924482] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:24:24.454 [2024-11-20 13:43:35.924510] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:24.454 [2024-11-20 13:43:35.924520] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:24:24.454 [2024-11-20 13:43:35.931722] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:24.454 [2024-11-20 13:43:35.931753] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:24.454 [2024-11-20 13:43:35.939704] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:24.454 [2024-11-20 13:43:35.940431] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:24:24.454 [2024-11-20 13:43:35.956652] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.454 13:43:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.454 13:43:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.454 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:24:24.454 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:24:24.454 13:43:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.454 13:43:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.454 [2024-11-20 13:43:36.263806] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:24:24.454 [2024-11-20 13:43:36.264323] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:24:24.454 [2024-11-20 13:43:36.264343] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:24:24.454 [2024-11-20 13:43:36.264355] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:24:24.454 [2024-11-20 13:43:36.271669] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:24.454 [2024-11-20 13:43:36.271706] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:24.454 [2024-11-20 13:43:36.279717] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:24.454 [2024-11-20 13:43:36.280483] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:24:24.454 [2024-11-20 13:43:36.288756] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:24:24.454 13:43:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.454 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:24:24.454 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:24.454 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:24:24.454 13:43:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.454 13:43:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.713 13:43:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.713 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:24:24.713 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:24:24.713 13:43:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.713 13:43:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.713 [2024-11-20 13:43:36.612874] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:24:24.713 [2024-11-20 13:43:36.613383] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:24:24.713 [2024-11-20 13:43:36.613403] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:24:24.713 [2024-11-20 13:43:36.613412] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:24:24.713 [2024-11-20 13:43:36.620737] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:24.713 [2024-11-20 13:43:36.620777] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:24.713 [2024-11-20 13:43:36.628670] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:24.713 [2024-11-20 13:43:36.629399] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:24:24.713 [2024-11-20 13:43:36.636794] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:24:24.713 13:43:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.713 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:24:24.713 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:24:24.713 13:43:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.713 13:43:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:24:24.971 { 00:24:24.971 "ublk_device": "/dev/ublkb0", 00:24:24.971 "id": 0, 00:24:24.971 "queue_depth": 512, 00:24:24.971 "num_queues": 4, 00:24:24.971 "bdev_name": "Malloc0" 00:24:24.971 }, 00:24:24.971 { 00:24:24.971 "ublk_device": "/dev/ublkb1", 00:24:24.971 "id": 1, 00:24:24.971 "queue_depth": 512, 00:24:24.971 "num_queues": 4, 00:24:24.971 "bdev_name": "Malloc1" 00:24:24.971 }, 00:24:24.971 { 00:24:24.971 "ublk_device": "/dev/ublkb2", 00:24:24.971 "id": 2, 00:24:24.971 "queue_depth": 512, 00:24:24.971 "num_queues": 4, 00:24:24.971 "bdev_name": "Malloc2" 00:24:24.971 }, 00:24:24.971 { 00:24:24.971 "ublk_device": "/dev/ublkb3", 00:24:24.971 "id": 3, 00:24:24.971 "queue_depth": 512, 00:24:24.971 "num_queues": 4, 00:24:24.971 "bdev_name": "Malloc3" 00:24:24.971 } 00:24:24.971 ]' 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:24.971 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:24:25.231 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:24:25.231 13:43:36 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:24:25.231 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:24:25.231 13:43:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:24:25.231 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:25.231 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:24:25.231 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:25.231 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:24:25.231 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:24:25.231 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:25.231 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:24:25.231 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:24:25.231 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:24:25.490 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:24:25.490 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:24:25.490 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:25.490 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:24:25.490 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:25.490 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:24:25.490 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:24:25.490 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:25.490 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:24:25.490 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:24:25.490 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:24:25.490 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:24:25.490 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:25.748 [2024-11-20 13:43:37.557903] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:24:25.748 [2024-11-20 13:43:37.590254] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:25.748 [2024-11-20 13:43:37.592071] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:25.748 [2024-11-20 13:43:37.596715] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:25.748 [2024-11-20 13:43:37.597154] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:25.748 [2024-11-20 13:43:37.597175] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:25.748 [2024-11-20 13:43:37.615814] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:24:25.748 [2024-11-20 13:43:37.660707] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:25.748 [2024-11-20 13:43:37.662460] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:24:25.748 [2024-11-20 13:43:37.667653] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:25.748 [2024-11-20 13:43:37.668107] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:24:25.748 [2024-11-20 13:43:37.668128] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.748 13:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:25.748 [2024-11-20 13:43:37.675854] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:24:26.007 [2024-11-20 13:43:37.714388] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:26.007 [2024-11-20 13:43:37.715984] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:24:26.007 [2024-11-20 13:43:37.719715] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:26.007 [2024-11-20 13:43:37.720110] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:24:26.007 [2024-11-20 13:43:37.720126] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:24:26.007 13:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.007 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:26.007 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:24:26.007 13:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.007 13:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:26.007 [2024-11-20 
13:43:37.734790] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:24:26.007 [2024-11-20 13:43:37.770423] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:26.007 [2024-11-20 13:43:37.771304] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:24:26.007 [2024-11-20 13:43:37.775728] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:26.007 [2024-11-20 13:43:37.776162] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:24:26.007 [2024-11-20 13:43:37.776185] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:24:26.007 13:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.007 13:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:24:26.266 [2024-11-20 13:43:38.011792] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:26.266 [2024-11-20 13:43:38.020690] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:26.266 [2024-11-20 13:43:38.020788] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:24:26.266 13:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:24:26.266 13:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:26.266 13:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:26.266 13:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.266 13:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:26.849 13:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.849 13:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:26.849 13:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:26.849 13:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.849 13:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:27.415 13:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.415 13:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:27.415 13:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:24:27.415 13:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.415 13:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:27.674 13:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.674 13:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:27.674 13:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:24:27.674 13:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.674 13:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:28.242 13:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.242 13:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:24:28.242 13:43:39 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
00:24:28.242 13:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.242 13:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:28.242 13:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.242 13:43:39 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:24:28.242 13:43:39 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:24:28.242 13:43:40 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:24:28.242 13:43:40 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:28.242 13:43:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.242 13:43:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:28.242 13:43:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.242 13:43:40 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:24:28.242 13:43:40 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:24:28.242 ************************************ 00:24:28.242 END TEST test_create_multi_ublk 00:24:28.242 ************************************ 00:24:28.242 13:43:40 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:24:28.242 00:24:28.242 real 0m4.811s 00:24:28.242 user 0m1.116s 00:24:28.242 sys 0m0.204s 00:24:28.242 13:43:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.242 13:43:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:28.242 13:43:40 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:28.242 13:43:40 ublk -- ublk/ublk.sh@147 -- # cleanup 00:24:28.242 13:43:40 ublk -- ublk/ublk.sh@130 -- # killprocess 75603 00:24:28.242 13:43:40 ublk -- common/autotest_common.sh@954 -- # '[' -z 75603 ']' 00:24:28.242 13:43:40 ublk -- common/autotest_common.sh@958 -- # kill -0 75603 00:24:28.242 13:43:40 ublk -- common/autotest_common.sh@959 -- # uname 00:24:28.242 13:43:40 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.242 13:43:40 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75603 00:24:28.242 killing process with pid 75603 00:24:28.242 13:43:40 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:28.242 13:43:40 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:28.242 13:43:40 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75603' 00:24:28.242 13:43:40 ublk -- common/autotest_common.sh@973 -- # kill 75603 00:24:28.242 13:43:40 ublk -- common/autotest_common.sh@978 -- # wait 75603 00:24:29.619 [2024-11-20 13:43:41.397648] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:29.620 [2024-11-20 13:43:41.397740] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:30.998 00:24:30.998 real 0m32.997s 00:24:30.998 user 0m46.930s 00:24:30.998 sys 0m9.946s 00:24:30.998 13:43:42 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.998 ************************************ 00:24:30.998 END TEST ublk 00:24:30.998 ************************************ 00:24:30.998 13:43:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:30.998 13:43:42 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:24:30.998 13:43:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:30.998 
13:43:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:30.998 13:43:42 -- common/autotest_common.sh@10 -- # set +x 00:24:30.998 ************************************ 00:24:30.998 START TEST ublk_recovery 00:24:30.998 ************************************ 00:24:30.998 13:43:42 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:24:30.998 * Looking for test storage... 00:24:30.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:24:30.998 13:43:42 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:30.998 13:43:42 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:24:30.998 13:43:42 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:31.328 13:43:42 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:24:31.328 13:43:42 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.329 13:43:42 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:24:31.329 13:43:42 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.329 13:43:42 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.329 13:43:42 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.329 13:43:42 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:24:31.329 13:43:42 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.329 13:43:42 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:31.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.329 --rc genhtml_branch_coverage=1 00:24:31.329 --rc genhtml_function_coverage=1 00:24:31.329 --rc genhtml_legend=1 00:24:31.329 --rc geninfo_all_blocks=1 00:24:31.329 --rc geninfo_unexecuted_blocks=1 00:24:31.329 00:24:31.329 ' 00:24:31.329 13:43:42 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:31.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.329 --rc genhtml_branch_coverage=1 00:24:31.329 --rc genhtml_function_coverage=1 00:24:31.329 --rc genhtml_legend=1 00:24:31.329 --rc geninfo_all_blocks=1 00:24:31.329 --rc geninfo_unexecuted_blocks=1 00:24:31.329 00:24:31.329 ' 00:24:31.329 13:43:42 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:31.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.329 --rc genhtml_branch_coverage=1 00:24:31.329 --rc genhtml_function_coverage=1 00:24:31.329 --rc genhtml_legend=1 00:24:31.329 --rc geninfo_all_blocks=1 00:24:31.329 --rc geninfo_unexecuted_blocks=1 00:24:31.329 00:24:31.329 ' 00:24:31.329 13:43:42 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:31.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.329 --rc genhtml_branch_coverage=1 00:24:31.329 --rc genhtml_function_coverage=1 00:24:31.329 --rc genhtml_legend=1 00:24:31.329 --rc geninfo_all_blocks=1 00:24:31.329 --rc geninfo_unexecuted_blocks=1 00:24:31.329 00:24:31.329 ' 00:24:31.329 13:43:42 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:24:31.329 13:43:42 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:24:31.329 13:43:42 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:24:31.329 13:43:42 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:24:31.329 13:43:42 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:24:31.329 13:43:43 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:24:31.329 13:43:43 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:24:31.329 13:43:43 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:24:31.329 13:43:43 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:24:31.329 13:43:43 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:24:31.329 13:43:43 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76035 00:24:31.329 13:43:43 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:31.329 13:43:43 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:31.329 13:43:43 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76035 00:24:31.329 13:43:43 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76035 ']' 00:24:31.329 13:43:43 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.329 13:43:43 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.329 13:43:43 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.329 13:43:43 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.329 13:43:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.329 [2024-11-20 13:43:43.137633] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:24:31.329 [2024-11-20 13:43:43.137783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76035 ] 00:24:31.587 [2024-11-20 13:43:43.322583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:31.587 [2024-11-20 13:43:43.450027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.587 [2024-11-20 13:43:43.450061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.519 13:43:44 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.519 13:43:44 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:24:32.519 13:43:44 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:24:32.519 13:43:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.519 13:43:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.519 [2024-11-20 13:43:44.376635] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:32.519 [2024-11-20 13:43:44.379797] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:32.519 13:43:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.519 13:43:44 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:24:32.519 13:43:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.519 13:43:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.777 malloc0 00:24:32.777 13:43:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.777 13:43:44 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:24:32.777 13:43:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.777 13:43:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.777 [2024-11-20 13:43:44.544869] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:24:32.777 [2024-11-20 13:43:44.545047] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:24:32.777 [2024-11-20 13:43:44.545064] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:32.777 [2024-11-20 13:43:44.545079] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:24:32.777 [2024-11-20 13:43:44.553812] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:32.777 [2024-11-20 13:43:44.553857] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:32.777 [2024-11-20 13:43:44.560695] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:32.777 [2024-11-20 13:43:44.560897] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:24:32.777 [2024-11-20 13:43:44.583679] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:24:32.777 1 00:24:32.777 13:43:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.777 13:43:44 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:24:33.711 13:43:45 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76071 00:24:33.711 13:43:45 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:24:33.711 13:43:45 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:24:33.970 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:33.970 fio-3.35 00:24:33.970 Starting 1 process 00:24:39.243 13:43:50 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76035 00:24:39.243 13:43:50 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:24:44.518 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76035 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:24:44.518 13:43:55 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76181 00:24:44.518 13:43:55 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:44.518 13:43:55 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76181 00:24:44.518 13:43:55 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76181 ']' 00:24:44.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.518 13:43:55 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.518 13:43:55 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:44.518 13:43:55 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.518 13:43:55 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.518 13:43:55 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.518 13:43:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.518 [2024-11-20 13:43:55.727689] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
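For orientation, the recovery flow that ublk_recovery.sh drives above and below, condensed into the equivalent RPC sequence. Every command and argument is lifted from the xtrace; the shell variables are illustrative glue, not the script verbatim.

build/bin/spdk_tgt -m 0x3 -L ublk & spdk_pid=$!       # first target (pid 76035 above)
scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128  # exposes /dev/ublkb1
taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
    --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 & fio_pid=$!
kill -9 "$spdk_pid"                                   # crash the target while I/O is in flight
build/bin/spdk_tgt -m 0x3 -L ublk & spdk_pid=$!       # second target (pid 76181 below)
scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
scripts/rpc.py ublk_recover_disk malloc0 1            # re-attach the surviving ublk device
wait "$fio_pid"                                       # the 60 s job must still finish with err=0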
00:24:44.518 [2024-11-20 13:43:55.727837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76181 ] 00:24:44.518 [2024-11-20 13:43:55.913495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:44.518 [2024-11-20 13:43:56.037688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.518 [2024-11-20 13:43:56.037726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.086 13:43:56 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.086 13:43:56 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:24:45.086 13:43:56 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:24:45.086 13:43:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.086 13:43:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.086 [2024-11-20 13:43:56.959641] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:45.086 [2024-11-20 13:43:56.962501] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:45.086 13:43:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.086 13:43:56 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:24:45.086 13:43:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.086 13:43:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.346 malloc0 00:24:45.346 13:43:57 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.346 13:43:57 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:24:45.346 13:43:57 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.346 13:43:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.346 [2024-11-20 13:43:57.135858] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:24:45.346 [2024-11-20 13:43:57.135923] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:45.346 [2024-11-20 13:43:57.135936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:24:45.346 [2024-11-20 13:43:57.143685] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:24:45.346 [2024-11-20 13:43:57.143726] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:24:45.346 [2024-11-20 13:43:57.143738] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:24:45.346 [2024-11-20 13:43:57.143855] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:24:45.346 1 00:24:45.346 13:43:57 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.346 13:43:57 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76071 00:24:45.346 [2024-11-20 13:43:57.151652] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:24:45.346 [2024-11-20 13:43:57.156093] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:24:45.346 [2024-11-20 13:43:57.161984] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:24:45.346 [2024-11-20 
13:43:57.162024] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:25:41.664
00:25:41.664 fio_test: (groupid=0, jobs=1): err= 0: pid=76083: Wed Nov 20 13:44:45 2024
00:25:41.664 read: IOPS=19.3k, BW=75.3MiB/s (79.0MB/s)(4521MiB/60002msec)
00:25:41.664 slat (nsec): min=1978, max=621736, avg=8528.05, stdev=3334.83
00:25:41.664 clat (usec): min=1065, max=6567.0k, avg=3293.18, stdev=50673.74
00:25:41.664 lat (usec): min=1069, max=6567.0k, avg=3301.71, stdev=50673.72
00:25:41.664 clat percentiles (usec):
00:25:41.664 | 1.00th=[ 2073], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2442],
00:25:41.664 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2638], 60.00th=[ 2868],
00:25:41.664 | 70.00th=[ 3130], 80.00th=[ 3261], 90.00th=[ 3523], 95.00th=[ 4146],
00:25:41.664 | 99.00th=[ 5735], 99.50th=[ 6259], 99.90th=[ 8291], 99.95th=[ 9503],
00:25:41.664 | 99.99th=[13173]
00:25:41.664 bw ( KiB/s): min=16664, max=102544, per=100.00%, avg=85768.84, stdev=13705.51, samples=107
00:25:41.664 iops : min= 4166, max=25636, avg=21442.18, stdev=3426.40, samples=107
00:25:41.664 write: IOPS=19.3k, BW=75.3MiB/s (79.0MB/s)(4519MiB/60002msec); 0 zone resets
00:25:41.664 slat (usec): min=2, max=464, avg= 8.53, stdev= 3.21
00:25:41.664 clat (usec): min=1084, max=6567.0k, avg=3325.40, stdev=46870.58
00:25:41.664 lat (usec): min=1091, max=6567.0k, avg=3333.93, stdev=46870.56
00:25:41.664 clat percentiles (usec):
00:25:41.664 | 1.00th=[ 2073], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2540],
00:25:41.664 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2933],
00:25:41.664 | 70.00th=[ 3228], 80.00th=[ 3392], 90.00th=[ 3621], 95.00th=[ 4146],
00:25:41.664 | 99.00th=[ 5735], 99.50th=[ 6390], 99.90th=[ 8356], 99.95th=[ 9503],
00:25:41.664 | 99.99th=[13304]
00:25:41.664 bw ( KiB/s): min=16760, max=101208, per=100.00%, avg=85729.80, stdev=13691.40, samples=107
00:25:41.664 iops : min= 4190, max=25302, avg=21432.42, stdev=3422.85, samples=107
00:25:41.664 lat (msec) : 2=0.56%, 4=93.57%, 10=5.83%, 20=0.03%, >=2000=0.01%
00:25:41.664 cpu : usr=12.52%, sys=32.92%, ctx=100440, majf=0, minf=13
00:25:41.664 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:25:41.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:41.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:41.664 issued rwts: total=1157382,1156827,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:41.664 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:41.664
00:25:41.664 Run status group 0 (all jobs):
00:25:41.664 READ: bw=75.3MiB/s (79.0MB/s), 75.3MiB/s-75.3MiB/s (79.0MB/s-79.0MB/s), io=4521MiB (4741MB), run=60002-60002msec
00:25:41.664 WRITE: bw=75.3MiB/s (79.0MB/s), 75.3MiB/s-75.3MiB/s (79.0MB/s-79.0MB/s), io=4519MiB (4738MB), run=60002-60002msec
00:25:41.664
00:25:41.664 Disk stats (read/write):
00:25:41.664 ublkb1: ios=1154653/1154155, merge=0/0, ticks=3684832/3588113, in_queue=7272945, util=99.97%
00:25:41.664 13:44:45 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:25:41.664 13:44:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.664 13:44:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:25:41.664 [2024-11-20 13:44:45.888823] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:25:41.664 [2024-11-20 13:44:45.926678] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:25:41.664 [2024-11-20
13:44:45.927162] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:25:41.664 [2024-11-20 13:44:45.935678] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:41.665 [2024-11-20 13:44:45.939805] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:25:41.665 [2024-11-20 13:44:45.939833] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:25:41.665 13:44:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.665 13:44:45 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:25:41.665 13:44:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.665 13:44:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.665 [2024-11-20 13:44:45.950787] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:25:41.665 [2024-11-20 13:44:45.958639] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:25:41.665 [2024-11-20 13:44:45.958689] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:25:41.665 13:44:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.665 13:44:45 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:25:41.665 13:44:45 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:25:41.665 13:44:45 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76181 00:25:41.665 13:44:45 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76181 ']' 00:25:41.665 13:44:45 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76181 00:25:41.665 13:44:45 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:25:41.665 13:44:45 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.665 13:44:45 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76181 00:25:41.665 killing process with pid 76181 00:25:41.665 13:44:46 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:41.665 13:44:46 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:41.665 13:44:46 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76181' 00:25:41.665 13:44:46 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76181 00:25:41.665 13:44:46 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76181 00:25:41.665 [2024-11-20 13:44:47.666367] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:25:41.665 [2024-11-20 13:44:47.666691] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:25:41.665 ************************************ 00:25:41.665 END TEST ublk_recovery 00:25:41.665 ************************************ 00:25:41.665 00:25:41.665 real 1m6.352s 00:25:41.665 user 1m50.946s 00:25:41.665 sys 0m38.182s 00:25:41.665 13:44:49 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:41.665 13:44:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.665 13:44:49 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:25:41.665 13:44:49 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:25:41.665 13:44:49 -- spdk/autotest.sh@260 -- # timing_exit lib 00:25:41.665 13:44:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:41.665 13:44:49 -- common/autotest_common.sh@10 -- # set +x 00:25:41.665 13:44:49 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:25:41.665 13:44:49 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:25:41.665 13:44:49 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:25:41.665 13:44:49 -- 
spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:41.665 13:44:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:41.665 13:44:49 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:41.665 13:44:49 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:25:41.665 13:44:49 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:41.665 13:44:49 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:41.665 13:44:49 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:25:41.665 13:44:49 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:41.665 13:44:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:41.665 13:44:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:41.665 13:44:49 -- common/autotest_common.sh@10 -- # set +x 00:25:41.665 ************************************ 00:25:41.665 START TEST ftl 00:25:41.665 ************************************ 00:25:41.665 13:44:49 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:41.665 * Looking for test storage... 00:25:41.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:41.665 13:44:49 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:41.665 13:44:49 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:25:41.665 13:44:49 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:41.665 13:44:49 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:41.665 13:44:49 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:41.665 13:44:49 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:41.665 13:44:49 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:41.665 13:44:49 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:25:41.665 13:44:49 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:25:41.665 13:44:49 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:25:41.665 13:44:49 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:25:41.665 13:44:49 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:25:41.665 13:44:49 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:25:41.665 13:44:49 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:25:41.665 13:44:49 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:41.665 13:44:49 ftl -- scripts/common.sh@344 -- # case "$op" in 00:25:41.665 13:44:49 ftl -- scripts/common.sh@345 -- # : 1 00:25:41.665 13:44:49 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:41.665 13:44:49 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:41.665 13:44:49 ftl -- scripts/common.sh@365 -- # decimal 1 00:25:41.665 13:44:49 ftl -- scripts/common.sh@353 -- # local d=1 00:25:41.665 13:44:49 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:41.665 13:44:49 ftl -- scripts/common.sh@355 -- # echo 1 00:25:41.665 13:44:49 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:25:41.665 13:44:49 ftl -- scripts/common.sh@366 -- # decimal 2 00:25:41.665 13:44:49 ftl -- scripts/common.sh@353 -- # local d=2 00:25:41.665 13:44:49 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:41.665 13:44:49 ftl -- scripts/common.sh@355 -- # echo 2 00:25:41.665 13:44:49 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:25:41.665 13:44:49 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:41.665 13:44:49 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:41.665 13:44:49 ftl -- scripts/common.sh@368 -- # return 0 00:25:41.665 13:44:49 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:41.665 13:44:49 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:41.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.665 --rc genhtml_branch_coverage=1 00:25:41.665 --rc genhtml_function_coverage=1 00:25:41.665 --rc genhtml_legend=1 00:25:41.665 --rc geninfo_all_blocks=1 00:25:41.665 --rc geninfo_unexecuted_blocks=1 00:25:41.665 00:25:41.665 ' 00:25:41.665 13:44:49 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:41.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.665 --rc genhtml_branch_coverage=1 00:25:41.665 --rc genhtml_function_coverage=1 00:25:41.665 --rc genhtml_legend=1 00:25:41.665 --rc geninfo_all_blocks=1 00:25:41.665 --rc geninfo_unexecuted_blocks=1 00:25:41.665 00:25:41.665 ' 00:25:41.665 13:44:49 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:41.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.665 --rc genhtml_branch_coverage=1 00:25:41.666 --rc genhtml_function_coverage=1 00:25:41.666 --rc genhtml_legend=1 00:25:41.666 --rc geninfo_all_blocks=1 00:25:41.666 --rc geninfo_unexecuted_blocks=1 00:25:41.666 00:25:41.666 ' 00:25:41.666 13:44:49 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:41.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.666 --rc genhtml_branch_coverage=1 00:25:41.666 --rc genhtml_function_coverage=1 00:25:41.666 --rc genhtml_legend=1 00:25:41.666 --rc geninfo_all_blocks=1 00:25:41.666 --rc geninfo_unexecuted_blocks=1 00:25:41.666 00:25:41.666 ' 00:25:41.666 13:44:49 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:41.666 13:44:49 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:41.666 13:44:49 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:41.666 13:44:49 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:41.666 13:44:49 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
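The lt 1.15 2 walk traced above is scripts/common.sh comparing the installed lcov version against 2, field by field, before picking the --rc flag spelling. A self-contained sketch of that comparison, reconstructed from the xtrace (the function body is an approximation, not the file verbatim):

lt() {  # succeed when version $1 sorts strictly below version $2
    local IFS=.-:                                # split on the separators the trace shows
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields compare as 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                                     # equal is not less-than
}
lt 1.15 2 && echo '1.15 < 2'                     # the exact call traced above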
00:25:41.666 13:44:49 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:41.666 13:44:49 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:41.666 13:44:49 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:41.666 13:44:49 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:41.666 13:44:49 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:41.666 13:44:49 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:41.666 13:44:49 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:41.666 13:44:49 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:41.666 13:44:49 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:41.666 13:44:49 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:41.666 13:44:49 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:41.666 13:44:49 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:41.666 13:44:49 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:41.666 13:44:49 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:41.666 13:44:49 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:41.666 13:44:49 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:41.666 13:44:49 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:41.666 13:44:49 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:41.666 13:44:49 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:41.666 13:44:49 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:41.666 13:44:49 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:41.666 13:44:49 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:41.666 13:44:49 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:41.666 13:44:49 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:41.666 13:44:49 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:41.666 13:44:49 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:25:41.666 13:44:49 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:25:41.666 13:44:49 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:25:41.666 13:44:49 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:25:41.666 13:44:49 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:41.666 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:41.666 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:41.666 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:41.666 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:41.666 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:41.666 13:44:50 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76987 00:25:41.666 13:44:50 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:25:41.666 13:44:50 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76987 00:25:41.666 13:44:50 ftl -- common/autotest_common.sh@835 -- # '[' -z 76987 ']' 00:25:41.666 13:44:50 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.666 13:44:50 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.666 13:44:50 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.666 13:44:50 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.666 13:44:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:41.666 [2024-11-20 13:44:50.582347] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:25:41.666 [2024-11-20 13:44:50.582559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76987 ] 00:25:41.666 [2024-11-20 13:44:50.785694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.666 [2024-11-20 13:44:50.908068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.666 13:44:51 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:41.666 13:44:51 ftl -- common/autotest_common.sh@868 -- # return 0 00:25:41.666 13:44:51 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:25:41.666 13:44:51 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:25:41.666 13:44:52 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:25:41.666 13:44:52 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:41.666 13:44:53 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:25:41.666 13:44:53 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:25:41.666 13:44:53 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:25:41.666 13:44:53 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:25:41.666 13:44:53 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:25:41.666 13:44:53 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:25:41.666 13:44:53 ftl -- ftl/ftl.sh@50 -- # break 00:25:41.666 13:44:53 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:25:41.666 13:44:53 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:25:41.666 13:44:53 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:25:41.666 13:44:53 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:25:41.926 13:44:53 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:25:41.926 13:44:53 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:25:41.926 13:44:53 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:25:41.926 13:44:53 ftl -- ftl/ftl.sh@63 -- # break 00:25:41.926 13:44:53 ftl -- ftl/ftl.sh@66 -- # killprocess 76987 00:25:41.926 13:44:53 ftl -- common/autotest_common.sh@954 -- # '[' -z 76987 ']' 00:25:41.926 13:44:53 ftl -- common/autotest_common.sh@958 -- # kill -0 76987 00:25:41.926 13:44:53 ftl -- common/autotest_common.sh@959 -- # uname 00:25:41.926 13:44:53 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.926 13:44:53 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76987 00:25:41.926 killing process with pid 76987 00:25:41.926 13:44:53 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:41.926 13:44:53 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:41.926 13:44:53 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76987' 00:25:41.926 13:44:53 ftl -- common/autotest_common.sh@973 -- # kill 76987 00:25:41.926 13:44:53 ftl -- common/autotest_common.sh@978 -- # wait 76987 00:25:44.460 13:44:56 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:25:44.460 13:44:56 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:25:44.460 13:44:56 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:44.460 13:44:56 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:44.460 13:44:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:44.460 ************************************ 00:25:44.460 START TEST ftl_fio_basic 00:25:44.460 ************************************ 00:25:44.460 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:25:44.460 * Looking for test storage... 00:25:44.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:44.460 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:44.460 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:25:44.460 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:44.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.720 --rc genhtml_branch_coverage=1 00:25:44.720 --rc genhtml_function_coverage=1 00:25:44.720 --rc genhtml_legend=1 00:25:44.720 --rc geninfo_all_blocks=1 00:25:44.720 --rc geninfo_unexecuted_blocks=1 00:25:44.720 00:25:44.720 ' 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:44.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.720 --rc genhtml_branch_coverage=1 00:25:44.720 --rc genhtml_function_coverage=1 00:25:44.720 --rc genhtml_legend=1 00:25:44.720 --rc geninfo_all_blocks=1 00:25:44.720 --rc geninfo_unexecuted_blocks=1 00:25:44.720 00:25:44.720 ' 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:44.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.720 --rc genhtml_branch_coverage=1 00:25:44.720 --rc genhtml_function_coverage=1 00:25:44.720 --rc genhtml_legend=1 00:25:44.720 --rc geninfo_all_blocks=1 00:25:44.720 --rc geninfo_unexecuted_blocks=1 00:25:44.720 00:25:44.720 ' 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:44.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.720 --rc genhtml_branch_coverage=1 00:25:44.720 --rc genhtml_function_coverage=1 00:25:44.720 --rc genhtml_legend=1 00:25:44.720 --rc geninfo_all_blocks=1 00:25:44.720 --rc geninfo_unexecuted_blocks=1 00:25:44.720 00:25:44.720 ' 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:44.720 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77137 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77137 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77137 ']' 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:44.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:44.721 13:44:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:44.721 [2024-11-20 13:44:56.656972] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
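Condensing the harness setup traced above: the 'basic' suite maps to three fio workloads, the FTL bdev name and JSON config are exported for the fio jobs (their consumption happens outside this excerpt), and a target is launched on a three-core mask. A sketch using only values visible in the log:

export FTL_BDEV_NAME=ftl0
export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
tests='randw-verify randw-verify-j2 randw-verify-depth128'   # suite['basic']
timeout=240                                                  # per-RPC timeout used later
"$SPDK_BIN_DIR/spdk_tgt" -m 7 & svcpid=$!                    # mask 7: reactors on cores 0-2
waitforlisten "$svcpid"                                      # autotest helper: poll /var/tmp/spdk.sock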
00:25:44.721 [2024-11-20 13:44:56.657111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77137 ] 00:25:44.983 [2024-11-20 13:44:56.859508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:45.243 [2024-11-20 13:44:57.023950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.243 [2024-11-20 13:44:57.024134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.243 [2024-11-20 13:44:57.024183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:46.181 13:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:46.181 13:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:25:46.181 13:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:46.181 13:44:57 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:25:46.181 13:44:57 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:46.181 13:44:57 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:25:46.181 13:44:57 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:25:46.181 13:44:57 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:46.440 13:44:58 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:46.440 13:44:58 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:25:46.440 13:44:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:46.440 13:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:46.440 13:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:46.440 13:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:46.440 13:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:46.440 13:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:46.700 13:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:46.700 { 00:25:46.700 "name": "nvme0n1", 00:25:46.700 "aliases": [ 00:25:46.700 "83e08cb0-499e-4045-a5b8-c1bc46008203" 00:25:46.700 ], 00:25:46.700 "product_name": "NVMe disk", 00:25:46.700 "block_size": 4096, 00:25:46.700 "num_blocks": 1310720, 00:25:46.700 "uuid": "83e08cb0-499e-4045-a5b8-c1bc46008203", 00:25:46.700 "numa_id": -1, 00:25:46.700 "assigned_rate_limits": { 00:25:46.700 "rw_ios_per_sec": 0, 00:25:46.700 "rw_mbytes_per_sec": 0, 00:25:46.700 "r_mbytes_per_sec": 0, 00:25:46.700 "w_mbytes_per_sec": 0 00:25:46.700 }, 00:25:46.700 "claimed": false, 00:25:46.700 "zoned": false, 00:25:46.700 "supported_io_types": { 00:25:46.700 "read": true, 00:25:46.700 "write": true, 00:25:46.700 "unmap": true, 00:25:46.700 "flush": true, 00:25:46.700 "reset": true, 00:25:46.700 "nvme_admin": true, 00:25:46.700 "nvme_io": true, 00:25:46.700 "nvme_io_md": false, 00:25:46.700 "write_zeroes": true, 00:25:46.700 "zcopy": false, 00:25:46.700 "get_zone_info": false, 00:25:46.700 "zone_management": false, 00:25:46.700 "zone_append": false, 00:25:46.700 "compare": true, 00:25:46.700 "compare_and_write": false, 00:25:46.700 "abort": true, 00:25:46.700 
"seek_hole": false, 00:25:46.700 "seek_data": false, 00:25:46.700 "copy": true, 00:25:46.700 "nvme_iov_md": false 00:25:46.700 }, 00:25:46.700 "driver_specific": { 00:25:46.700 "nvme": [ 00:25:46.700 { 00:25:46.700 "pci_address": "0000:00:11.0", 00:25:46.700 "trid": { 00:25:46.700 "trtype": "PCIe", 00:25:46.700 "traddr": "0000:00:11.0" 00:25:46.700 }, 00:25:46.700 "ctrlr_data": { 00:25:46.700 "cntlid": 0, 00:25:46.700 "vendor_id": "0x1b36", 00:25:46.700 "model_number": "QEMU NVMe Ctrl", 00:25:46.700 "serial_number": "12341", 00:25:46.700 "firmware_revision": "8.0.0", 00:25:46.700 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:46.700 "oacs": { 00:25:46.700 "security": 0, 00:25:46.700 "format": 1, 00:25:46.700 "firmware": 0, 00:25:46.700 "ns_manage": 1 00:25:46.700 }, 00:25:46.700 "multi_ctrlr": false, 00:25:46.700 "ana_reporting": false 00:25:46.700 }, 00:25:46.700 "vs": { 00:25:46.700 "nvme_version": "1.4" 00:25:46.700 }, 00:25:46.700 "ns_data": { 00:25:46.700 "id": 1, 00:25:46.700 "can_share": false 00:25:46.700 } 00:25:46.700 } 00:25:46.700 ], 00:25:46.700 "mp_policy": "active_passive" 00:25:46.700 } 00:25:46.700 } 00:25:46.700 ]' 00:25:46.700 13:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:46.700 13:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:46.700 13:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:46.700 13:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:46.700 13:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:46.700 13:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:25:46.700 13:44:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:25:46.700 13:44:58 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:46.700 13:44:58 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:25:46.700 13:44:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:46.700 13:44:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:46.959 13:44:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:25:46.959 13:44:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:47.218 13:44:59 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=3b0a0fa7-2ef0-4159-b2bc-94408c9d7e7b 00:25:47.218 13:44:59 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3b0a0fa7-2ef0-4159-b2bc-94408c9d7e7b 00:25:47.476 13:44:59 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b51b6846-20e8-45fc-b065-7b9b8f217cf8 00:25:47.476 13:44:59 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b51b6846-20e8-45fc-b065-7b9b8f217cf8 00:25:47.476 13:44:59 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:25:47.476 13:44:59 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:47.476 13:44:59 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b51b6846-20e8-45fc-b065-7b9b8f217cf8 00:25:47.476 13:44:59 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:25:47.476 13:44:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b51b6846-20e8-45fc-b065-7b9b8f217cf8 00:25:47.476 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b51b6846-20e8-45fc-b065-7b9b8f217cf8 
00:25:47.476 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:47.476 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:47.476 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:47.476 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b51b6846-20e8-45fc-b065-7b9b8f217cf8 00:25:47.735 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:47.735 { 00:25:47.735 "name": "b51b6846-20e8-45fc-b065-7b9b8f217cf8", 00:25:47.735 "aliases": [ 00:25:47.735 "lvs/nvme0n1p0" 00:25:47.735 ], 00:25:47.735 "product_name": "Logical Volume", 00:25:47.735 "block_size": 4096, 00:25:47.735 "num_blocks": 26476544, 00:25:47.735 "uuid": "b51b6846-20e8-45fc-b065-7b9b8f217cf8", 00:25:47.735 "assigned_rate_limits": { 00:25:47.735 "rw_ios_per_sec": 0, 00:25:47.735 "rw_mbytes_per_sec": 0, 00:25:47.735 "r_mbytes_per_sec": 0, 00:25:47.735 "w_mbytes_per_sec": 0 00:25:47.735 }, 00:25:47.735 "claimed": false, 00:25:47.735 "zoned": false, 00:25:47.735 "supported_io_types": { 00:25:47.735 "read": true, 00:25:47.735 "write": true, 00:25:47.735 "unmap": true, 00:25:47.735 "flush": false, 00:25:47.735 "reset": true, 00:25:47.735 "nvme_admin": false, 00:25:47.735 "nvme_io": false, 00:25:47.735 "nvme_io_md": false, 00:25:47.735 "write_zeroes": true, 00:25:47.735 "zcopy": false, 00:25:47.735 "get_zone_info": false, 00:25:47.735 "zone_management": false, 00:25:47.735 "zone_append": false, 00:25:47.735 "compare": false, 00:25:47.735 "compare_and_write": false, 00:25:47.735 "abort": false, 00:25:47.735 "seek_hole": true, 00:25:47.735 "seek_data": true, 00:25:47.735 "copy": false, 00:25:47.735 "nvme_iov_md": false 00:25:47.735 }, 00:25:47.735 "driver_specific": { 00:25:47.735 "lvol": { 00:25:47.735 "lvol_store_uuid": "3b0a0fa7-2ef0-4159-b2bc-94408c9d7e7b", 00:25:47.735 "base_bdev": "nvme0n1", 00:25:47.735 "thin_provision": true, 00:25:47.735 "num_allocated_clusters": 0, 00:25:47.735 "snapshot": false, 00:25:47.735 "clone": false, 00:25:47.735 "esnap_clone": false 00:25:47.735 } 00:25:47.735 } 00:25:47.735 } 00:25:47.735 ]' 00:25:47.735 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:47.735 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:47.735 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:47.735 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:47.735 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:47.735 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:25:47.735 13:44:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:25:47.735 13:44:59 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:25:47.735 13:44:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:47.995 13:44:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:47.995 13:44:59 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:47.995 13:44:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b51b6846-20e8-45fc-b065-7b9b8f217cf8 00:25:47.995 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=b51b6846-20e8-45fc-b065-7b9b8f217cf8 00:25:47.995 13:44:59 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:47.995 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:47.995 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:47.995 13:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b51b6846-20e8-45fc-b065-7b9b8f217cf8 00:25:48.255 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:48.255 { 00:25:48.255 "name": "b51b6846-20e8-45fc-b065-7b9b8f217cf8", 00:25:48.255 "aliases": [ 00:25:48.255 "lvs/nvme0n1p0" 00:25:48.255 ], 00:25:48.255 "product_name": "Logical Volume", 00:25:48.255 "block_size": 4096, 00:25:48.255 "num_blocks": 26476544, 00:25:48.255 "uuid": "b51b6846-20e8-45fc-b065-7b9b8f217cf8", 00:25:48.255 "assigned_rate_limits": { 00:25:48.255 "rw_ios_per_sec": 0, 00:25:48.255 "rw_mbytes_per_sec": 0, 00:25:48.255 "r_mbytes_per_sec": 0, 00:25:48.255 "w_mbytes_per_sec": 0 00:25:48.255 }, 00:25:48.255 "claimed": false, 00:25:48.255 "zoned": false, 00:25:48.255 "supported_io_types": { 00:25:48.255 "read": true, 00:25:48.255 "write": true, 00:25:48.255 "unmap": true, 00:25:48.255 "flush": false, 00:25:48.255 "reset": true, 00:25:48.255 "nvme_admin": false, 00:25:48.255 "nvme_io": false, 00:25:48.255 "nvme_io_md": false, 00:25:48.255 "write_zeroes": true, 00:25:48.255 "zcopy": false, 00:25:48.255 "get_zone_info": false, 00:25:48.255 "zone_management": false, 00:25:48.255 "zone_append": false, 00:25:48.255 "compare": false, 00:25:48.255 "compare_and_write": false, 00:25:48.255 "abort": false, 00:25:48.255 "seek_hole": true, 00:25:48.255 "seek_data": true, 00:25:48.255 "copy": false, 00:25:48.255 "nvme_iov_md": false 00:25:48.255 }, 00:25:48.255 "driver_specific": { 00:25:48.255 "lvol": { 00:25:48.255 "lvol_store_uuid": "3b0a0fa7-2ef0-4159-b2bc-94408c9d7e7b", 00:25:48.255 "base_bdev": "nvme0n1", 00:25:48.255 "thin_provision": true, 00:25:48.255 "num_allocated_clusters": 0, 00:25:48.255 "snapshot": false, 00:25:48.255 "clone": false, 00:25:48.255 "esnap_clone": false 00:25:48.255 } 00:25:48.255 } 00:25:48.255 } 00:25:48.255 ]' 00:25:48.255 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:48.255 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:48.255 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:48.255 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:48.255 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:48.255 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:25:48.255 13:45:00 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:25:48.255 13:45:00 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:48.514 13:45:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:25:48.514 13:45:00 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:25:48.514 13:45:00 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:25:48.514 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:25:48.514 13:45:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b51b6846-20e8-45fc-b065-7b9b8f217cf8 00:25:48.514 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=b51b6846-20e8-45fc-b065-7b9b8f217cf8 00:25:48.514 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:48.514 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:48.514 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:48.514 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b51b6846-20e8-45fc-b065-7b9b8f217cf8 00:25:48.774 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:48.774 { 00:25:48.774 "name": "b51b6846-20e8-45fc-b065-7b9b8f217cf8", 00:25:48.774 "aliases": [ 00:25:48.774 "lvs/nvme0n1p0" 00:25:48.774 ], 00:25:48.774 "product_name": "Logical Volume", 00:25:48.774 "block_size": 4096, 00:25:48.774 "num_blocks": 26476544, 00:25:48.774 "uuid": "b51b6846-20e8-45fc-b065-7b9b8f217cf8", 00:25:48.774 "assigned_rate_limits": { 00:25:48.774 "rw_ios_per_sec": 0, 00:25:48.774 "rw_mbytes_per_sec": 0, 00:25:48.774 "r_mbytes_per_sec": 0, 00:25:48.774 "w_mbytes_per_sec": 0 00:25:48.774 }, 00:25:48.774 "claimed": false, 00:25:48.774 "zoned": false, 00:25:48.774 "supported_io_types": { 00:25:48.774 "read": true, 00:25:48.774 "write": true, 00:25:48.774 "unmap": true, 00:25:48.774 "flush": false, 00:25:48.774 "reset": true, 00:25:48.774 "nvme_admin": false, 00:25:48.774 "nvme_io": false, 00:25:48.774 "nvme_io_md": false, 00:25:48.774 "write_zeroes": true, 00:25:48.774 "zcopy": false, 00:25:48.774 "get_zone_info": false, 00:25:48.774 "zone_management": false, 00:25:48.774 "zone_append": false, 00:25:48.774 "compare": false, 00:25:48.774 "compare_and_write": false, 00:25:48.774 "abort": false, 00:25:48.774 "seek_hole": true, 00:25:48.774 "seek_data": true, 00:25:48.774 "copy": false, 00:25:48.774 "nvme_iov_md": false 00:25:48.774 }, 00:25:48.774 "driver_specific": { 00:25:48.774 "lvol": { 00:25:48.774 "lvol_store_uuid": "3b0a0fa7-2ef0-4159-b2bc-94408c9d7e7b", 00:25:48.774 "base_bdev": "nvme0n1", 00:25:48.774 "thin_provision": true, 00:25:48.774 "num_allocated_clusters": 0, 00:25:48.774 "snapshot": false, 00:25:48.774 "clone": false, 00:25:48.774 "esnap_clone": false 00:25:48.774 } 00:25:48.774 } 00:25:48.774 } 00:25:48.774 ]' 00:25:48.774 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:48.774 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:48.774 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:48.774 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:48.774 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:48.774 13:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:25:48.774 13:45:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:25:48.774 13:45:00 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:25:48.774 13:45:00 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b51b6846-20e8-45fc-b065-7b9b8f217cf8 -c nvc0n1p0 --l2p_dram_limit 60 00:25:49.043 [2024-11-20 13:45:00.892727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.043 [2024-11-20 13:45:00.892786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:49.043 [2024-11-20 13:45:00.892808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:49.043 
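Two notes on the trace above. First, the fio.sh line 52 message ('[: -eq: unary operator expected') is an ordinary shell artifact: a variable inside a single-bracket test expanded to empty, leaving '[' -eq 1 ']', and the script simply falls through and continues. Second, the full FTL bdev assembly, condensed into its RPC sequence with the UUIDs and sizes copied from the log:

rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # 5 GiB base namespace
rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                           # lvs 3b0a0fa7-...
rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3b0a0fa7-2ef0-4159-b2bc-94408c9d7e7b
rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV-cache controller
rpc.py bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB cache slice
rpc.py -t 240 bdev_ftl_create -b ftl0 -d b51b6846-20e8-45fc-b065-7b9b8f217cf8 \
    -c nvc0n1p0 --l2p_dram_limit 60                                   # 60 MB DRAM cap for the L2P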
[2024-11-20 13:45:00.892819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.043 [2024-11-20 13:45:00.892913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.043 [2024-11-20 13:45:00.892930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:49.043 [2024-11-20 13:45:00.892945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:49.043 [2024-11-20 13:45:00.892956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.043 [2024-11-20 13:45:00.892992] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:49.043 [2024-11-20 13:45:00.894052] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:49.043 [2024-11-20 13:45:00.894093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.043 [2024-11-20 13:45:00.894105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:49.043 [2024-11-20 13:45:00.894120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:25:49.043 [2024-11-20 13:45:00.894130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.043 [2024-11-20 13:45:00.894260] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f8813801-64f0-401e-8328-13cb255c0593 00:25:49.043 [2024-11-20 13:45:00.895823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.043 [2024-11-20 13:45:00.895865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:49.043 [2024-11-20 13:45:00.895878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:49.043 [2024-11-20 13:45:00.895891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.043 [2024-11-20 13:45:00.903287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.043 [2024-11-20 13:45:00.903324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:49.043 [2024-11-20 13:45:00.903338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.341 ms 00:25:49.043 [2024-11-20 13:45:00.903352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.043 [2024-11-20 13:45:00.903473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.043 [2024-11-20 13:45:00.903491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:49.043 [2024-11-20 13:45:00.903503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:25:49.043 [2024-11-20 13:45:00.903520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.043 [2024-11-20 13:45:00.903615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.043 [2024-11-20 13:45:00.903631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:49.043 [2024-11-20 13:45:00.903642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:49.043 [2024-11-20 13:45:00.903656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.043 [2024-11-20 13:45:00.903688] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:49.043 [2024-11-20 13:45:00.909168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.043 [2024-11-20 
13:45:00.909213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:49.043 [2024-11-20 13:45:00.909232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.495 ms 00:25:49.043 [2024-11-20 13:45:00.909247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.043 [2024-11-20 13:45:00.909294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.043 [2024-11-20 13:45:00.909305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:49.043 [2024-11-20 13:45:00.909319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:49.043 [2024-11-20 13:45:00.909329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.043 [2024-11-20 13:45:00.909379] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:49.043 [2024-11-20 13:45:00.909522] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:49.043 [2024-11-20 13:45:00.909548] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:49.043 [2024-11-20 13:45:00.909563] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:49.043 [2024-11-20 13:45:00.909579] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:49.043 [2024-11-20 13:45:00.909592] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:49.043 [2024-11-20 13:45:00.909618] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:49.043 [2024-11-20 13:45:00.909629] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:49.043 [2024-11-20 13:45:00.909642] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:49.043 [2024-11-20 13:45:00.909652] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:49.043 [2024-11-20 13:45:00.909666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.043 [2024-11-20 13:45:00.909679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:49.043 [2024-11-20 13:45:00.909694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:25:49.043 [2024-11-20 13:45:00.909704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.043 [2024-11-20 13:45:00.909788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.043 [2024-11-20 13:45:00.909799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:49.043 [2024-11-20 13:45:00.909813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:49.044 [2024-11-20 13:45:00.909822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.044 [2024-11-20 13:45:00.909931] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:49.044 [2024-11-20 13:45:00.909943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:49.044 [2024-11-20 13:45:00.909959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:49.044 [2024-11-20 13:45:00.909970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.044 [2024-11-20 13:45:00.909984] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:25:49.044 [2024-11-20 13:45:00.909993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:49.044 [2024-11-20 13:45:00.910006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:49.044 [2024-11-20 13:45:00.910015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:49.044 [2024-11-20 13:45:00.910028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:49.044 [2024-11-20 13:45:00.910037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:49.044 [2024-11-20 13:45:00.910049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:49.044 [2024-11-20 13:45:00.910058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:49.044 [2024-11-20 13:45:00.910070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:49.044 [2024-11-20 13:45:00.910080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:49.044 [2024-11-20 13:45:00.910092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:49.044 [2024-11-20 13:45:00.910101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.044 [2024-11-20 13:45:00.910117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:49.044 [2024-11-20 13:45:00.910127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:49.044 [2024-11-20 13:45:00.910138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.044 [2024-11-20 13:45:00.910148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:49.044 [2024-11-20 13:45:00.910160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:49.044 [2024-11-20 13:45:00.910176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.044 [2024-11-20 13:45:00.910188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:49.044 [2024-11-20 13:45:00.910198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:49.044 [2024-11-20 13:45:00.910226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.044 [2024-11-20 13:45:00.910236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:49.044 [2024-11-20 13:45:00.910248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:49.044 [2024-11-20 13:45:00.910258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.044 [2024-11-20 13:45:00.910271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:49.044 [2024-11-20 13:45:00.910281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:49.044 [2024-11-20 13:45:00.910293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.044 [2024-11-20 13:45:00.910303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:49.044 [2024-11-20 13:45:00.910318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:49.044 [2024-11-20 13:45:00.910328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:49.044 [2024-11-20 13:45:00.910340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:49.044 [2024-11-20 13:45:00.910365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:49.044 [2024-11-20 13:45:00.910378] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:49.044 [2024-11-20 13:45:00.910388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:49.044 [2024-11-20 13:45:00.910400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:49.044 [2024-11-20 13:45:00.910410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.044 [2024-11-20 13:45:00.910422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:49.044 [2024-11-20 13:45:00.910432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:49.044 [2024-11-20 13:45:00.910449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.044 [2024-11-20 13:45:00.910459] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:49.044 [2024-11-20 13:45:00.910473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:49.044 [2024-11-20 13:45:00.910484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:49.044 [2024-11-20 13:45:00.910497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.044 [2024-11-20 13:45:00.910508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:49.044 [2024-11-20 13:45:00.910525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:49.044 [2024-11-20 13:45:00.910535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:49.044 [2024-11-20 13:45:00.910548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:49.044 [2024-11-20 13:45:00.910558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:49.044 [2024-11-20 13:45:00.910571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:49.044 [2024-11-20 13:45:00.910586] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:49.044 [2024-11-20 13:45:00.910612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:49.044 [2024-11-20 13:45:00.910626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:49.044 [2024-11-20 13:45:00.910640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:49.044 [2024-11-20 13:45:00.910652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:49.044 [2024-11-20 13:45:00.910666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:49.044 [2024-11-20 13:45:00.910677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:49.044 [2024-11-20 13:45:00.910691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:49.044 [2024-11-20 13:45:00.910702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:49.044 [2024-11-20 13:45:00.910716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:25:49.044 [2024-11-20 13:45:00.910727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:49.044 [2024-11-20 13:45:00.910744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:49.044 [2024-11-20 13:45:00.910755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:49.044 [2024-11-20 13:45:00.910770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:49.044 [2024-11-20 13:45:00.910781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:49.044 [2024-11-20 13:45:00.910794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:49.044 [2024-11-20 13:45:00.910806] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:49.044 [2024-11-20 13:45:00.910821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:49.044 [2024-11-20 13:45:00.910836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:49.044 [2024-11-20 13:45:00.910850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:49.044 [2024-11-20 13:45:00.910861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:49.044 [2024-11-20 13:45:00.910876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:49.044 [2024-11-20 13:45:00.910888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.044 [2024-11-20 13:45:00.910902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:49.044 [2024-11-20 13:45:00.910913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:25:49.044 [2024-11-20 13:45:00.910927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.044 [2024-11-20 13:45:00.910988] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
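The key numbers in the layout dump above cross-check by hand: the device exposes 20971520 logical blocks of 4096 bytes, and with the reported 4-byte L2P address size the full mapping table comes out to exactly the 80.00 MiB `l2p` region shown. A minimal shell sketch of that arithmetic, assuming only values printed in this log (the variable names are illustrative and are not part of fio.sh):

```bash
# Sanity check of the FTL layout numbers reported above
# (illustrative only; values copied from this log).
l2p_entries=20971520      # "L2P entries: 20971520"
l2p_addr_size=4           # "L2P address size: 4"
block_size=4096           # lvol block_size queried earlier via jq

# Exposed FTL capacity: 20971520 blocks * 4096 B = 81920 MiB (80 GiB)
echo "$(( l2p_entries * block_size / 1024 / 1024 )) MiB exposed"

# Full L2P table: 20971520 entries * 4 B = 80 MiB,
# matching "Region l2p ... blocks: 80.00 MiB" in the dump.
echo "$(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB L2P"
```

Since the test created the device with `--l2p_dram_limit 60`, only part of that 80 MiB table may stay resident in DRAM; the startup trace below confirms this with "l2p maximum resident size is: 59 (of 60) MiB".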
00:25:49.044 [2024-11-20 13:45:00.911008] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:53.252 [2024-11-20 13:45:05.186031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.252 [2024-11-20 13:45:05.186111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:53.252 [2024-11-20 13:45:05.186130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4281.980 ms 00:25:53.252 [2024-11-20 13:45:05.186144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.512 [2024-11-20 13:45:05.226327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.512 [2024-11-20 13:45:05.226401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:53.512 [2024-11-20 13:45:05.226421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.858 ms 00:25:53.512 [2024-11-20 13:45:05.226437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.512 [2024-11-20 13:45:05.226640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.512 [2024-11-20 13:45:05.226660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:53.512 [2024-11-20 13:45:05.226680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:25:53.512 [2024-11-20 13:45:05.226698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.512 [2024-11-20 13:45:05.286796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.512 [2024-11-20 13:45:05.286865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:53.512 [2024-11-20 13:45:05.286888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.133 ms 00:25:53.512 [2024-11-20 13:45:05.286904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.512 [2024-11-20 13:45:05.286968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.512 [2024-11-20 13:45:05.286983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:53.512 [2024-11-20 13:45:05.286995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:53.512 [2024-11-20 13:45:05.287008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.512 [2024-11-20 13:45:05.287552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.512 [2024-11-20 13:45:05.287579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:53.512 [2024-11-20 13:45:05.287591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:25:53.512 [2024-11-20 13:45:05.287618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.512 [2024-11-20 13:45:05.287755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.512 [2024-11-20 13:45:05.287777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:53.512 [2024-11-20 13:45:05.287789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:25:53.512 [2024-11-20 13:45:05.287806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.512 [2024-11-20 13:45:05.309156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.512 [2024-11-20 13:45:05.309225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:53.512 [2024-11-20 
13:45:05.309245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.350 ms 00:25:53.512 [2024-11-20 13:45:05.309259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.512 [2024-11-20 13:45:05.322908] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:53.512 [2024-11-20 13:45:05.339647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.512 [2024-11-20 13:45:05.339738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:53.512 [2024-11-20 13:45:05.339761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.268 ms 00:25:53.512 [2024-11-20 13:45:05.339792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.512 [2024-11-20 13:45:05.426993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.512 [2024-11-20 13:45:05.427062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:53.512 [2024-11-20 13:45:05.427089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.270 ms 00:25:53.512 [2024-11-20 13:45:05.427101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.512 [2024-11-20 13:45:05.427365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.512 [2024-11-20 13:45:05.427383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:53.513 [2024-11-20 13:45:05.427402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:25:53.513 [2024-11-20 13:45:05.427413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.513 [2024-11-20 13:45:05.466392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.513 [2024-11-20 13:45:05.466459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:53.513 [2024-11-20 13:45:05.466481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.965 ms 00:25:53.513 [2024-11-20 13:45:05.466493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.800 [2024-11-20 13:45:05.504502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.800 [2024-11-20 13:45:05.504559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:53.800 [2024-11-20 13:45:05.504581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.992 ms 00:25:53.800 [2024-11-20 13:45:05.504592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.800 [2024-11-20 13:45:05.505356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.800 [2024-11-20 13:45:05.505381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:53.800 [2024-11-20 13:45:05.505397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:25:53.800 [2024-11-20 13:45:05.505408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.800 [2024-11-20 13:45:05.607526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.800 [2024-11-20 13:45:05.607591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:53.800 [2024-11-20 13:45:05.607626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.194 ms 00:25:53.800 [2024-11-20 13:45:05.607642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.800 [2024-11-20 
13:45:05.647025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.800 [2024-11-20 13:45:05.647088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:53.800 [2024-11-20 13:45:05.647110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.312 ms 00:25:53.800 [2024-11-20 13:45:05.647122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.800 [2024-11-20 13:45:05.686327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.800 [2024-11-20 13:45:05.686397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:53.800 [2024-11-20 13:45:05.686418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.200 ms 00:25:53.800 [2024-11-20 13:45:05.686429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.800 [2024-11-20 13:45:05.726559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.800 [2024-11-20 13:45:05.726635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:53.800 [2024-11-20 13:45:05.726659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.118 ms 00:25:53.800 [2024-11-20 13:45:05.726671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.800 [2024-11-20 13:45:05.726747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.800 [2024-11-20 13:45:05.726762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:53.800 [2024-11-20 13:45:05.726788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:53.800 [2024-11-20 13:45:05.726804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.800 [2024-11-20 13:45:05.727004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.800 [2024-11-20 13:45:05.727022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:53.800 [2024-11-20 13:45:05.727038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:53.800 [2024-11-20 13:45:05.727049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.800 [2024-11-20 13:45:05.728330] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4842.929 ms, result 0 00:25:53.800 { 00:25:53.800 "name": "ftl0", 00:25:53.800 "uuid": "f8813801-64f0-401e-8328-13cb255c0593" 00:25:53.800 } 00:25:54.059 13:45:05 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:25:54.059 13:45:05 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:25:54.059 13:45:05 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:54.059 13:45:05 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:25:54.059 13:45:05 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:54.059 13:45:05 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:54.059 13:45:05 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:54.059 13:45:05 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:25:54.317 [ 00:25:54.317 { 00:25:54.317 "name": "ftl0", 00:25:54.317 "aliases": [ 00:25:54.317 "f8813801-64f0-401e-8328-13cb255c0593" 00:25:54.317 ], 00:25:54.317 "product_name": "FTL 
disk", 00:25:54.317 "block_size": 4096, 00:25:54.317 "num_blocks": 20971520, 00:25:54.317 "uuid": "f8813801-64f0-401e-8328-13cb255c0593", 00:25:54.317 "assigned_rate_limits": { 00:25:54.317 "rw_ios_per_sec": 0, 00:25:54.317 "rw_mbytes_per_sec": 0, 00:25:54.317 "r_mbytes_per_sec": 0, 00:25:54.317 "w_mbytes_per_sec": 0 00:25:54.317 }, 00:25:54.317 "claimed": false, 00:25:54.317 "zoned": false, 00:25:54.317 "supported_io_types": { 00:25:54.317 "read": true, 00:25:54.317 "write": true, 00:25:54.317 "unmap": true, 00:25:54.317 "flush": true, 00:25:54.317 "reset": false, 00:25:54.317 "nvme_admin": false, 00:25:54.317 "nvme_io": false, 00:25:54.317 "nvme_io_md": false, 00:25:54.317 "write_zeroes": true, 00:25:54.317 "zcopy": false, 00:25:54.317 "get_zone_info": false, 00:25:54.317 "zone_management": false, 00:25:54.317 "zone_append": false, 00:25:54.317 "compare": false, 00:25:54.317 "compare_and_write": false, 00:25:54.317 "abort": false, 00:25:54.317 "seek_hole": false, 00:25:54.317 "seek_data": false, 00:25:54.317 "copy": false, 00:25:54.317 "nvme_iov_md": false 00:25:54.317 }, 00:25:54.317 "driver_specific": { 00:25:54.317 "ftl": { 00:25:54.317 "base_bdev": "b51b6846-20e8-45fc-b065-7b9b8f217cf8", 00:25:54.317 "cache": "nvc0n1p0" 00:25:54.317 } 00:25:54.317 } 00:25:54.317 } 00:25:54.317 ] 00:25:54.317 13:45:06 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:25:54.317 13:45:06 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:25:54.317 13:45:06 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:54.576 13:45:06 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:25:54.576 13:45:06 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:54.835 [2024-11-20 13:45:06.611681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-11-20 13:45:06.611762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:54.835 [2024-11-20 13:45:06.611780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:54.835 [2024-11-20 13:45:06.611794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-11-20 13:45:06.611835] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:54.835 [2024-11-20 13:45:06.616356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-11-20 13:45:06.616392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:54.835 [2024-11-20 13:45:06.616414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.500 ms 00:25:54.835 [2024-11-20 13:45:06.616425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-11-20 13:45:06.616964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-11-20 13:45:06.616986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:54.835 [2024-11-20 13:45:06.617003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:25:54.835 [2024-11-20 13:45:06.617014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-11-20 13:45:06.619918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-11-20 13:45:06.619943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:54.835 
[2024-11-20 13:45:06.619958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.880 ms 00:25:54.835 [2024-11-20 13:45:06.619969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-11-20 13:45:06.625323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-11-20 13:45:06.625361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:54.835 [2024-11-20 13:45:06.625376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.327 ms 00:25:54.835 [2024-11-20 13:45:06.625387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-11-20 13:45:06.665165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-11-20 13:45:06.665234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:54.835 [2024-11-20 13:45:06.665256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.743 ms 00:25:54.835 [2024-11-20 13:45:06.665267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-11-20 13:45:06.689343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-11-20 13:45:06.689417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:54.835 [2024-11-20 13:45:06.689446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.998 ms 00:25:54.835 [2024-11-20 13:45:06.689459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-11-20 13:45:06.689904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-11-20 13:45:06.689934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:54.835 [2024-11-20 13:45:06.689951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:25:54.835 [2024-11-20 13:45:06.689963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-11-20 13:45:06.730457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-11-20 13:45:06.730543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:54.835 [2024-11-20 13:45:06.730567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.520 ms 00:25:54.835 [2024-11-20 13:45:06.730579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-11-20 13:45:06.770882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-11-20 13:45:06.770947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:54.835 [2024-11-20 13:45:06.770969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.273 ms 00:25:54.835 [2024-11-20 13:45:06.770981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.094 [2024-11-20 13:45:06.809220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.094 [2024-11-20 13:45:06.809286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:55.094 [2024-11-20 13:45:06.809308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.213 ms 00:25:55.094 [2024-11-20 13:45:06.809318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.094 [2024-11-20 13:45:06.845965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.094 [2024-11-20 13:45:06.846026] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:55.094 [2024-11-20 13:45:06.846046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.539 ms 00:25:55.094 [2024-11-20 13:45:06.846057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.094 [2024-11-20 13:45:06.846123] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:55.094 [2024-11-20 13:45:06.846142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 
[2024-11-20 13:45:06.846462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:55.094 [2024-11-20 13:45:06.846517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:25:55.095 [2024-11-20 13:45:06.846809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.846995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:55.095 [2024-11-20 13:45:06.847549] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:55.095 [2024-11-20 13:45:06.847574] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f8813801-64f0-401e-8328-13cb255c0593 00:25:55.095 [2024-11-20 13:45:06.847586] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:55.095 [2024-11-20 13:45:06.847602] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:55.095 [2024-11-20 13:45:06.847621] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:55.095 [2024-11-20 13:45:06.847639] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:55.095 [2024-11-20 13:45:06.847650] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:55.095 [2024-11-20 13:45:06.847663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:55.095 [2024-11-20 13:45:06.847674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:55.095 [2024-11-20 13:45:06.847685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:55.095 [2024-11-20 13:45:06.847695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:55.095 [2024-11-20 13:45:06.847707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.095 [2024-11-20 13:45:06.847718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:55.095 [2024-11-20 13:45:06.847733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:25:55.095 [2024-11-20 13:45:06.847743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.095 [2024-11-20 13:45:06.868223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.095 [2024-11-20 13:45:06.868283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:55.095 [2024-11-20 13:45:06.868303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.428 ms 00:25:55.095 [2024-11-20 13:45:06.868314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.096 [2024-11-20 13:45:06.868917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.096 [2024-11-20 13:45:06.868936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:55.096 [2024-11-20 13:45:06.868951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:25:55.096 [2024-11-20 13:45:06.868962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.096 [2024-11-20 13:45:06.941650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.096 [2024-11-20 13:45:06.941726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:55.096 [2024-11-20 13:45:06.941747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.096 [2024-11-20 13:45:06.941759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
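The "Rollback" entries here and below are the 'FTL shutdown' management process unwinding the startup steps in reverse, after first persisting the L2P, NV cache, band, and trim metadata and setting the clean state, as traced above. The whole lifecycle exercised by this test reduces to one create/unload pair; a minimal sketch using the exact rpc.py commands from this run (the base-bdev UUID and cache partition name are whatever this particular run created, not fixed values):

```bash
# Illustrative FTL bdev lifecycle as driven by ftl/fio.sh;
# commands and flags are copied verbatim from this log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Create the FTL bdev: -d names the base (lvol) bdev, -c the NV cache
# partition, --l2p_dram_limit caps resident L2P memory in MiB.
$RPC -t 240 bdev_ftl_create -b ftl0 \
    -d b51b6846-20e8-45fc-b065-7b9b8f217cf8 \
    -c nvc0n1p0 --l2p_dram_limit 60

# ... run the fio workloads against ftl0 ...

# Tear down; this starts the 'FTL shutdown' process whose persist and
# rollback steps appear in the surrounding trace.
$RPC bdev_ftl_unload -b ftl0
```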
00:25:55.096 [2024-11-20 13:45:06.941856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.096 [2024-11-20 13:45:06.941867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:55.096 [2024-11-20 13:45:06.941881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.096 [2024-11-20 13:45:06.941892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.096 [2024-11-20 13:45:06.942058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.096 [2024-11-20 13:45:06.942076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:55.096 [2024-11-20 13:45:06.942090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.096 [2024-11-20 13:45:06.942100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.096 [2024-11-20 13:45:06.942137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.096 [2024-11-20 13:45:06.942148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:55.096 [2024-11-20 13:45:06.942161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.096 [2024-11-20 13:45:06.942180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.354 [2024-11-20 13:45:07.078932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.354 [2024-11-20 13:45:07.079001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:55.354 [2024-11-20 13:45:07.079036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.354 [2024-11-20 13:45:07.079048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.354 [2024-11-20 13:45:07.186059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.354 [2024-11-20 13:45:07.186134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:55.354 [2024-11-20 13:45:07.186155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.354 [2024-11-20 13:45:07.186175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.354 [2024-11-20 13:45:07.186340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.354 [2024-11-20 13:45:07.186355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:55.354 [2024-11-20 13:45:07.186374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.354 [2024-11-20 13:45:07.186386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.354 [2024-11-20 13:45:07.186487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.354 [2024-11-20 13:45:07.186500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:55.354 [2024-11-20 13:45:07.186515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.355 [2024-11-20 13:45:07.186526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.355 [2024-11-20 13:45:07.186703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.355 [2024-11-20 13:45:07.186719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:55.355 [2024-11-20 13:45:07.186734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.355 [2024-11-20 
13:45:07.186748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.355 [2024-11-20 13:45:07.186824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.355 [2024-11-20 13:45:07.186837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:55.355 [2024-11-20 13:45:07.186852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.355 [2024-11-20 13:45:07.186863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.355 [2024-11-20 13:45:07.186919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.355 [2024-11-20 13:45:07.186930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:55.355 [2024-11-20 13:45:07.186945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.355 [2024-11-20 13:45:07.186957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.355 [2024-11-20 13:45:07.187026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.355 [2024-11-20 13:45:07.187039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:55.355 [2024-11-20 13:45:07.187053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.355 [2024-11-20 13:45:07.187063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.355 [2024-11-20 13:45:07.187255] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 576.476 ms, result 0 00:25:55.355 true 00:25:55.355 13:45:07 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77137 00:25:55.355 13:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77137 ']' 00:25:55.355 13:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77137 00:25:55.355 13:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:25:55.355 13:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:55.355 13:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77137 00:25:55.355 13:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:55.355 13:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:55.355 killing process with pid 77137 00:25:55.355 13:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77137' 00:25:55.355 13:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77137 00:25:55.355 13:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77137 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:00.653 13:45:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:26:00.653 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:26:00.653 fio-3.35 00:26:00.653 Starting 1 thread 00:26:05.919 00:26:05.919 test: (groupid=0, jobs=1): err= 0: pid=77367: Wed Nov 20 13:45:17 2024 00:26:05.919 read: IOPS=1021, BW=67.9MiB/s (71.2MB/s)(255MiB/3751msec) 00:26:05.919 slat (usec): min=4, max=144, avg= 7.60, stdev= 4.02 00:26:05.919 clat (usec): min=306, max=1018, avg=438.48, stdev=55.89 00:26:05.919 lat (usec): min=318, max=1025, avg=446.08, stdev=56.59 00:26:05.919 clat percentiles (usec): 00:26:05.919 | 1.00th=[ 334], 5.00th=[ 355], 10.00th=[ 363], 20.00th=[ 400], 00:26:05.919 | 30.00th=[ 412], 40.00th=[ 424], 50.00th=[ 433], 60.00th=[ 445], 00:26:05.919 | 70.00th=[ 465], 80.00th=[ 482], 90.00th=[ 506], 95.00th=[ 537], 00:26:05.919 | 99.00th=[ 594], 99.50th=[ 627], 99.90th=[ 725], 99.95th=[ 807], 00:26:05.919 | 99.99th=[ 1020] 00:26:05.919 write: IOPS=1029, BW=68.3MiB/s (71.7MB/s)(256MiB/3747msec); 0 zone resets 00:26:05.919 slat (usec): min=15, max=274, avg=22.25, stdev= 7.60 00:26:05.919 clat (usec): min=341, max=1391, avg=494.75, stdev=67.04 00:26:05.919 lat (usec): min=369, max=1409, avg=516.99, stdev=67.60 00:26:05.919 clat percentiles (usec): 00:26:05.919 | 1.00th=[ 388], 5.00th=[ 420], 10.00th=[ 429], 20.00th=[ 449], 00:26:05.919 | 30.00th=[ 457], 40.00th=[ 465], 50.00th=[ 482], 60.00th=[ 498], 00:26:05.919 | 70.00th=[ 515], 80.00th=[ 537], 90.00th=[ 578], 95.00th=[ 603], 00:26:05.919 | 99.00th=[ 742], 99.50th=[ 807], 99.90th=[ 922], 99.95th=[ 1172], 00:26:05.919 | 99.99th=[ 1385] 00:26:05.919 bw ( KiB/s): min=66776, max=71264, per=99.97%, avg=69960.00, stdev=1517.13, samples=7 00:26:05.919 iops : min= 982, max= 1048, avg=1028.71, stdev=22.26, samples=7 00:26:05.919 lat (usec) : 500=74.90%, 750=24.65%, 1000=0.40% 00:26:05.919 lat (msec) : 
2=0.05% 00:26:05.919 cpu : usr=98.72%, sys=0.16%, ctx=10, majf=0, minf=1170 00:26:05.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.919 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:05.919 00:26:05.919 Run status group 0 (all jobs): 00:26:05.919 READ: bw=67.9MiB/s (71.2MB/s), 67.9MiB/s-67.9MiB/s (71.2MB/s-71.2MB/s), io=255MiB (267MB), run=3751-3751msec 00:26:05.919 WRITE: bw=68.3MiB/s (71.7MB/s), 68.3MiB/s-68.3MiB/s (71.7MB/s-71.7MB/s), io=256MiB (269MB), run=3747-3747msec 00:26:07.821 ----------------------------------------------------- 00:26:07.821 Suppressions used: 00:26:07.821 count bytes template 00:26:07.821 1 5 /usr/src/fio/parse.c 00:26:07.821 1 8 libtcmalloc_minimal.so 00:26:07.821 1 904 libcrypto.so 00:26:07.821 ----------------------------------------------------- 00:26:07.821 00:26:07.821 13:45:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:26:07.821 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:07.821 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:07.821 13:45:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:26:07.821 13:45:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:26:07.821 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:07.821 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:07.821 13:45:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:26:07.821 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:26:07.821 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:07.821 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:07.821 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:07.821 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:07.821 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:26:07.822 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:07.822 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:07.822 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:07.822 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:26:07.822 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:08.081 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:08.081 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:08.081 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:26:08.081 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:08.081 13:45:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:26:08.081 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:08.081 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:08.081 fio-3.35 00:26:08.081 Starting 2 threads 00:26:46.788 00:26:46.788 first_half: (groupid=0, jobs=1): err= 0: pid=77472: Wed Nov 20 13:45:52 2024 00:26:46.788 read: IOPS=2105, BW=8423KiB/s (8626kB/s)(255MiB/30982msec) 00:26:46.788 slat (nsec): min=3825, max=84512, avg=9313.69, stdev=4189.46 00:26:46.788 clat (usec): min=1011, max=329270, avg=44974.68, stdev=25584.39 00:26:46.788 lat (usec): min=1020, max=329286, avg=44984.00, stdev=25585.20 00:26:46.788 clat percentiles (msec): 00:26:46.788 | 1.00th=[ 13], 5.00th=[ 30], 10.00th=[ 37], 20.00th=[ 38], 00:26:46.788 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 44], 60.00th=[ 45], 00:26:46.788 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 48], 95.00th=[ 53], 00:26:46.788 | 99.00th=[ 203], 99.50th=[ 226], 99.90th=[ 275], 99.95th=[ 292], 00:26:46.788 | 99.99th=[ 321] 00:26:46.788 write: IOPS=2508, BW=9.80MiB/s (10.3MB/s)(256MiB/26121msec); 0 zone resets 00:26:46.788 slat (usec): min=4, max=945, avg=11.51, stdev= 8.45 00:26:46.788 clat (usec): min=471, max=128696, avg=15676.91, stdev=26198.14 00:26:46.788 lat (usec): min=497, max=128710, avg=15688.41, stdev=26198.98 00:26:46.788 clat percentiles (usec): 00:26:46.788 | 1.00th=[ 1156], 5.00th=[ 1516], 10.00th=[ 1827], 20.00th=[ 2311], 00:26:46.788 | 30.00th=[ 4228], 40.00th=[ 6063], 50.00th=[ 7439], 60.00th=[ 8717], 00:26:46.788 | 70.00th=[ 10159], 80.00th=[ 12780], 90.00th=[ 44827], 95.00th=[ 92799], 00:26:46.788 | 99.00th=[106431], 99.50th=[111674], 99.90th=[123208], 99.95th=[127402], 00:26:46.788 | 99.99th=[128451] 00:26:46.788 bw ( KiB/s): min= 920, max=40304, per=90.08%, avg=18081.45, stdev=11334.81, samples=29 00:26:46.788 iops : min= 230, max=10076, avg=4520.34, stdev=2833.67, samples=29 00:26:46.788 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.14% 00:26:46.788 lat (msec) : 2=6.79%, 4=7.67%, 10=20.42%, 20=11.23%, 50=45.90% 00:26:46.788 lat (msec) : 100=5.54%, 250=2.17%, 500=0.12% 00:26:46.788 cpu : usr=99.21%, sys=0.17%, ctx=68, majf=0, minf=5550 00:26:46.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:46.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.788 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:46.788 issued rwts: total=65243,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:46.788 second_half: (groupid=0, jobs=1): err= 0: pid=77473: Wed Nov 20 13:45:52 2024 00:26:46.788 read: IOPS=2121, BW=8488KiB/s (8691kB/s)(254MiB/30700msec) 00:26:46.788 slat (usec): min=3, max=112, avg=10.87, stdev= 4.50 00:26:46.788 clat (usec): min=963, max=304654, avg=45910.75, stdev=23948.86 00:26:46.788 lat (usec): min=981, max=304669, avg=45921.62, stdev=23949.11 00:26:46.788 clat percentiles (msec): 00:26:46.789 | 1.00th=[ 7], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 38], 00:26:46.789 | 30.00th=[ 39], 40.00th=[ 41], 50.00th=[ 44], 60.00th=[ 45], 00:26:46.789 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 48], 95.00th=[ 58], 00:26:46.789 
| 99.00th=[ 176], 99.50th=[ 203], 99.90th=[ 245], 99.95th=[ 257], 00:26:46.789 | 99.99th=[ 296] 00:26:46.789 write: IOPS=3401, BW=13.3MiB/s (13.9MB/s)(256MiB/19269msec); 0 zone resets 00:26:46.789 slat (usec): min=4, max=702, avg=12.61, stdev= 8.70 00:26:46.789 clat (usec): min=488, max=128753, avg=14276.65, stdev=25349.13 00:26:46.789 lat (usec): min=513, max=128767, avg=14289.26, stdev=25349.45 00:26:46.789 clat percentiles (usec): 00:26:46.789 | 1.00th=[ 1254], 5.00th=[ 1647], 10.00th=[ 1860], 20.00th=[ 2147], 00:26:46.789 | 30.00th=[ 2573], 40.00th=[ 4424], 50.00th=[ 6652], 60.00th=[ 8225], 00:26:46.789 | 70.00th=[ 10159], 80.00th=[ 12780], 90.00th=[ 16581], 95.00th=[ 91751], 00:26:46.789 | 99.00th=[105382], 99.50th=[110625], 99.90th=[120062], 99.95th=[124257], 00:26:46.789 | 99.99th=[126354] 00:26:46.789 bw ( KiB/s): min= 488, max=37664, per=96.74%, avg=19418.07, stdev=11206.16, samples=27 00:26:46.789 iops : min= 122, max= 9416, avg=4854.52, stdev=2801.54, samples=27 00:26:46.789 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.08% 00:26:46.789 lat (msec) : 2=7.32%, 4=11.99%, 10=16.16%, 20=10.82%, 50=45.33% 00:26:46.789 lat (msec) : 100=5.86%, 250=2.37%, 500=0.04% 00:26:46.789 cpu : usr=99.11%, sys=0.22%, ctx=47, majf=0, minf=5569 00:26:46.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:46.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.789 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:46.789 issued rwts: total=65142,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:46.789 00:26:46.789 Run status group 0 (all jobs): 00:26:46.789 READ: bw=16.4MiB/s (17.2MB/s), 8423KiB/s-8488KiB/s (8626kB/s-8691kB/s), io=509MiB (534MB), run=30700-30982msec 00:26:46.789 WRITE: bw=19.6MiB/s (20.6MB/s), 9.80MiB/s-13.3MiB/s (10.3MB/s-13.9MB/s), io=512MiB (537MB), run=19269-26121msec 00:26:46.789 ----------------------------------------------------- 00:26:46.789 Suppressions used: 00:26:46.789 count bytes template 00:26:46.789 2 10 /usr/src/fio/parse.c 00:26:46.789 3 288 /usr/src/fio/iolog.c 00:26:46.789 1 8 libtcmalloc_minimal.so 00:26:46.789 1 904 libcrypto.so 00:26:46.789 ----------------------------------------------------- 00:26:46.789 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:46.789 
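The fio_bdev wrapper being traced here repeats the same preload dance used for the two earlier jobs: ldd the spdk_bdev ioengine, pick out the ASan runtime it was linked against, and put that runtime first in LD_PRELOAD so the sanitized plugin can be loaded into a stock fio binary. A minimal stand-alone sketch of the pattern; the plugin and job paths are illustrative placeholders, not taken from this run:

plugin=/path/to/spdk/build/fio/spdk_bdev
job=/path/to/randw-verify-depth128.fio
# ldd prints "libasan.so.N => /path/libasan.so.N (0x...)"; field 3 is the path
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# preload the sanitizer runtime (if any) ahead of the ioengine itself
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$job"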
13:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:26:46.789 13:45:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:46.789 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:46.789 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:46.789 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:26:46.789 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:46.789 13:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:46.789 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:46.789 fio-3.35 00:26:46.789 Starting 1 thread 00:26:59.013 00:26:59.013 test: (groupid=0, jobs=1): err= 0: pid=77863: Wed Nov 20 13:46:10 2024 00:26:59.013 read: IOPS=7484, BW=29.2MiB/s (30.7MB/s)(255MiB/8712msec) 00:26:59.013 slat (nsec): min=3513, max=36482, avg=5588.80, stdev=1879.64 00:26:59.013 clat (usec): min=709, max=35722, avg=17092.36, stdev=1388.28 00:26:59.013 lat (usec): min=713, max=35729, avg=17097.95, stdev=1388.27 00:26:59.013 clat percentiles (usec): 00:26:59.013 | 1.00th=[15664], 5.00th=[16057], 10.00th=[16188], 20.00th=[16319], 00:26:59.013 | 30.00th=[16581], 40.00th=[16712], 50.00th=[16909], 60.00th=[17171], 00:26:59.013 | 70.00th=[17171], 80.00th=[17695], 90.00th=[18220], 95.00th=[18744], 00:26:59.013 | 99.00th=[20055], 99.50th=[26608], 99.90th=[33162], 99.95th=[33424], 00:26:59.013 | 99.99th=[34866] 00:26:59.013 write: IOPS=12.6k, BW=49.3MiB/s (51.7MB/s)(256MiB/5195msec); 0 zone resets 00:26:59.013 slat (usec): min=4, max=1522, avg= 8.40, stdev=10.15 00:26:59.013 clat (usec): min=594, max=57849, avg=10097.82, stdev=11961.54 00:26:59.013 lat (usec): min=600, max=57857, avg=10106.22, stdev=11961.53 00:26:59.013 clat percentiles (usec): 00:26:59.013 | 1.00th=[ 963], 5.00th=[ 1156], 10.00th=[ 1303], 20.00th=[ 1500], 00:26:59.013 | 30.00th=[ 1696], 40.00th=[ 2057], 50.00th=[ 6587], 60.00th=[ 7898], 00:26:59.013 | 70.00th=[ 9634], 80.00th=[12518], 90.00th=[35390], 95.00th=[36963], 00:26:59.013 | 99.00th=[39584], 99.50th=[41681], 99.90th=[53740], 99.95th=[54789], 00:26:59.013 | 99.99th=[56361] 00:26:59.013 bw ( KiB/s): min=16744, max=71568, per=94.45%, avg=47662.55, stdev=13547.15, samples=11 00:26:59.013 iops : min= 4186, max=17892, avg=11915.64, stdev=3386.79, samples=11 00:26:59.013 lat (usec) : 750=0.02%, 1000=0.70% 00:26:59.013 lat (msec) : 2=19.01%, 4=1.37%, 10=15.02%, 20=55.29%, 50=8.51% 00:26:59.013 lat (msec) : 100=0.08% 00:26:59.013 cpu : usr=98.86%, sys=0.42%, ctx=21, majf=0, 
minf=5565 00:26:59.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:59.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.013 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:59.013 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:59.013 00:26:59.013 Run status group 0 (all jobs): 00:26:59.013 READ: bw=29.2MiB/s (30.7MB/s), 29.2MiB/s-29.2MiB/s (30.7MB/s-30.7MB/s), io=255MiB (267MB), run=8712-8712msec 00:26:59.013 WRITE: bw=49.3MiB/s (51.7MB/s), 49.3MiB/s-49.3MiB/s (51.7MB/s-51.7MB/s), io=256MiB (268MB), run=5195-5195msec 00:27:00.914 ----------------------------------------------------- 00:27:00.914 Suppressions used: 00:27:00.914 count bytes template 00:27:00.914 1 5 /usr/src/fio/parse.c 00:27:00.914 2 192 /usr/src/fio/iolog.c 00:27:00.914 1 8 libtcmalloc_minimal.so 00:27:00.914 1 904 libcrypto.so 00:27:00.914 ----------------------------------------------------- 00:27:00.914 00:27:00.914 13:46:12 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:27:00.914 13:46:12 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:00.914 13:46:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:00.914 13:46:12 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:00.914 Remove shared memory files 00:27:00.914 13:46:12 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:27:00.914 13:46:12 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:00.914 13:46:12 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:27:00.914 13:46:12 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:27:00.914 13:46:12 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57955 /dev/shm/spdk_tgt_trace.pid76035 00:27:00.914 13:46:12 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:00.914 13:46:12 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:27:00.914 ************************************ 00:27:00.914 END TEST ftl_fio_basic 00:27:00.914 ************************************ 00:27:00.914 00:27:00.914 real 1m16.382s 00:27:00.914 user 2m50.568s 00:27:00.914 sys 0m4.234s 00:27:00.914 13:46:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:00.914 13:46:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:00.914 13:46:12 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:27:00.914 13:46:12 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:00.914 13:46:12 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:00.914 13:46:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:00.914 ************************************ 00:27:00.914 START TEST ftl_bdevperf 00:27:00.914 ************************************ 00:27:00.915 13:46:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:27:00.915 * Looking for test storage... 
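The ftl_bdevperf test starting here is handed two PCIe addresses: per the device/cache_device assignments a few lines below, the first (0000:00:11.0) becomes the FTL base device and the second (0000:00:10.0) the NV cache device. Rerunning just this stage by hand would look roughly like the following, using the same arguments as this run:

cd /home/vagrant/spdk_repo/spdk
./test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0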
00:27:00.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:00.915 13:46:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:01.175 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:01.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.176 --rc genhtml_branch_coverage=1 00:27:01.176 --rc genhtml_function_coverage=1 00:27:01.176 --rc genhtml_legend=1 00:27:01.176 --rc geninfo_all_blocks=1 00:27:01.176 --rc geninfo_unexecuted_blocks=1 00:27:01.176 00:27:01.176 ' 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:01.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.176 --rc genhtml_branch_coverage=1 00:27:01.176 
--rc genhtml_function_coverage=1 00:27:01.176 --rc genhtml_legend=1 00:27:01.176 --rc geninfo_all_blocks=1 00:27:01.176 --rc geninfo_unexecuted_blocks=1 00:27:01.176 00:27:01.176 ' 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:01.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.176 --rc genhtml_branch_coverage=1 00:27:01.176 --rc genhtml_function_coverage=1 00:27:01.176 --rc genhtml_legend=1 00:27:01.176 --rc geninfo_all_blocks=1 00:27:01.176 --rc geninfo_unexecuted_blocks=1 00:27:01.176 00:27:01.176 ' 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:01.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.176 --rc genhtml_branch_coverage=1 00:27:01.176 --rc genhtml_function_coverage=1 00:27:01.176 --rc genhtml_legend=1 00:27:01.176 --rc geninfo_all_blocks=1 00:27:01.176 --rc geninfo_unexecuted_blocks=1 00:27:01.176 00:27:01.176 ' 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:27:01.176 13:46:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:01.176 13:46:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:27:01.176 13:46:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78106 00:27:01.176 13:46:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:27:01.176 13:46:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:27:01.176 13:46:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78106 00:27:01.176 13:46:13 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78106 ']' 00:27:01.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.176 13:46:13 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.176 13:46:13 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.176 13:46:13 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.176 13:46:13 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.176 13:46:13 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:01.176 [2024-11-20 13:46:13.101025] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
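bdevperf is started with -z, which makes it initialize and then sit waiting for RPC configuration rather than running a workload immediately; waitforlisten then polls the RPC socket until the app answers. A rough equivalent of that handshake, with the poll loop as an assumption rather than the harness's exact implementation:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!
# keep probing the default /var/tmp/spdk.sock until the app services RPCs
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done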
00:27:01.176 [2024-11-20 13:46:13.101332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78106 ] 00:27:01.440 [2024-11-20 13:46:13.276431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.440 [2024-11-20 13:46:13.392959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.024 13:46:13 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.024 13:46:13 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:02.024 13:46:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:02.024 13:46:13 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:27:02.024 13:46:13 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:02.024 13:46:13 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:27:02.024 13:46:13 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:27:02.024 13:46:13 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:02.610 13:46:14 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:02.610 13:46:14 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:27:02.610 13:46:14 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:02.610 13:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:02.610 13:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:02.610 13:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:27:02.610 13:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:27:02.610 13:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:02.610 13:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:02.610 { 00:27:02.610 "name": "nvme0n1", 00:27:02.610 "aliases": [ 00:27:02.610 "5a6f7539-b666-4ee8-bef3-934302cf5842" 00:27:02.610 ], 00:27:02.610 "product_name": "NVMe disk", 00:27:02.610 "block_size": 4096, 00:27:02.610 "num_blocks": 1310720, 00:27:02.610 "uuid": "5a6f7539-b666-4ee8-bef3-934302cf5842", 00:27:02.610 "numa_id": -1, 00:27:02.610 "assigned_rate_limits": { 00:27:02.610 "rw_ios_per_sec": 0, 00:27:02.610 "rw_mbytes_per_sec": 0, 00:27:02.610 "r_mbytes_per_sec": 0, 00:27:02.610 "w_mbytes_per_sec": 0 00:27:02.610 }, 00:27:02.610 "claimed": true, 00:27:02.610 "claim_type": "read_many_write_one", 00:27:02.610 "zoned": false, 00:27:02.610 "supported_io_types": { 00:27:02.610 "read": true, 00:27:02.610 "write": true, 00:27:02.610 "unmap": true, 00:27:02.610 "flush": true, 00:27:02.610 "reset": true, 00:27:02.610 "nvme_admin": true, 00:27:02.610 "nvme_io": true, 00:27:02.610 "nvme_io_md": false, 00:27:02.610 "write_zeroes": true, 00:27:02.610 "zcopy": false, 00:27:02.610 "get_zone_info": false, 00:27:02.610 "zone_management": false, 00:27:02.610 "zone_append": false, 00:27:02.610 "compare": true, 00:27:02.610 "compare_and_write": false, 00:27:02.610 "abort": true, 00:27:02.610 "seek_hole": false, 00:27:02.610 "seek_data": false, 00:27:02.610 "copy": true, 00:27:02.610 "nvme_iov_md": false 00:27:02.610 }, 00:27:02.610 "driver_specific": { 00:27:02.610 
"nvme": [ 00:27:02.610 { 00:27:02.610 "pci_address": "0000:00:11.0", 00:27:02.610 "trid": { 00:27:02.610 "trtype": "PCIe", 00:27:02.610 "traddr": "0000:00:11.0" 00:27:02.610 }, 00:27:02.610 "ctrlr_data": { 00:27:02.610 "cntlid": 0, 00:27:02.610 "vendor_id": "0x1b36", 00:27:02.611 "model_number": "QEMU NVMe Ctrl", 00:27:02.611 "serial_number": "12341", 00:27:02.611 "firmware_revision": "8.0.0", 00:27:02.611 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:02.611 "oacs": { 00:27:02.611 "security": 0, 00:27:02.611 "format": 1, 00:27:02.611 "firmware": 0, 00:27:02.611 "ns_manage": 1 00:27:02.611 }, 00:27:02.611 "multi_ctrlr": false, 00:27:02.611 "ana_reporting": false 00:27:02.611 }, 00:27:02.611 "vs": { 00:27:02.611 "nvme_version": "1.4" 00:27:02.611 }, 00:27:02.611 "ns_data": { 00:27:02.611 "id": 1, 00:27:02.611 "can_share": false 00:27:02.611 } 00:27:02.611 } 00:27:02.611 ], 00:27:02.611 "mp_policy": "active_passive" 00:27:02.611 } 00:27:02.611 } 00:27:02.611 ]' 00:27:02.611 13:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:02.611 13:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:27:02.611 13:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:02.883 13:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:02.883 13:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:02.883 13:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:27:02.883 13:46:14 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:27:02.883 13:46:14 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:02.883 13:46:14 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:27:02.883 13:46:14 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:02.883 13:46:14 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:02.883 13:46:14 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=3b0a0fa7-2ef0-4159-b2bc-94408c9d7e7b 00:27:02.883 13:46:14 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:27:02.883 13:46:14 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3b0a0fa7-2ef0-4159-b2bc-94408c9d7e7b 00:27:03.157 13:46:15 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:03.420 13:46:15 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=abd89071-0832-42c2-ab85-0c8dfa3efaa4 00:27:03.420 13:46:15 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u abd89071-0832-42c2-ab85-0c8dfa3efaa4 00:27:03.679 13:46:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=8bc6119c-b945-4d70-89f7-e64fc4debf07 00:27:03.679 13:46:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8bc6119c-b945-4d70-89f7-e64fc4debf07 00:27:03.679 13:46:15 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:27:03.679 13:46:15 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:03.679 13:46:15 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=8bc6119c-b945-4d70-89f7-e64fc4debf07 00:27:03.679 13:46:15 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:27:03.679 13:46:15 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 8bc6119c-b945-4d70-89f7-e64fc4debf07 00:27:03.679 13:46:15 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=8bc6119c-b945-4d70-89f7-e64fc4debf07 00:27:03.679 13:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:03.679 13:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:27:03.679 13:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:27:03.679 13:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8bc6119c-b945-4d70-89f7-e64fc4debf07 00:27:03.938 13:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:03.938 { 00:27:03.938 "name": "8bc6119c-b945-4d70-89f7-e64fc4debf07", 00:27:03.938 "aliases": [ 00:27:03.938 "lvs/nvme0n1p0" 00:27:03.939 ], 00:27:03.939 "product_name": "Logical Volume", 00:27:03.939 "block_size": 4096, 00:27:03.939 "num_blocks": 26476544, 00:27:03.939 "uuid": "8bc6119c-b945-4d70-89f7-e64fc4debf07", 00:27:03.939 "assigned_rate_limits": { 00:27:03.939 "rw_ios_per_sec": 0, 00:27:03.939 "rw_mbytes_per_sec": 0, 00:27:03.939 "r_mbytes_per_sec": 0, 00:27:03.939 "w_mbytes_per_sec": 0 00:27:03.939 }, 00:27:03.939 "claimed": false, 00:27:03.939 "zoned": false, 00:27:03.939 "supported_io_types": { 00:27:03.939 "read": true, 00:27:03.939 "write": true, 00:27:03.939 "unmap": true, 00:27:03.939 "flush": false, 00:27:03.939 "reset": true, 00:27:03.939 "nvme_admin": false, 00:27:03.939 "nvme_io": false, 00:27:03.939 "nvme_io_md": false, 00:27:03.939 "write_zeroes": true, 00:27:03.939 "zcopy": false, 00:27:03.939 "get_zone_info": false, 00:27:03.939 "zone_management": false, 00:27:03.939 "zone_append": false, 00:27:03.939 "compare": false, 00:27:03.939 "compare_and_write": false, 00:27:03.939 "abort": false, 00:27:03.939 "seek_hole": true, 00:27:03.939 "seek_data": true, 00:27:03.939 "copy": false, 00:27:03.939 "nvme_iov_md": false 00:27:03.939 }, 00:27:03.939 "driver_specific": { 00:27:03.939 "lvol": { 00:27:03.939 "lvol_store_uuid": "abd89071-0832-42c2-ab85-0c8dfa3efaa4", 00:27:03.939 "base_bdev": "nvme0n1", 00:27:03.939 "thin_provision": true, 00:27:03.939 "num_allocated_clusters": 0, 00:27:03.939 "snapshot": false, 00:27:03.939 "clone": false, 00:27:03.939 "esnap_clone": false 00:27:03.939 } 00:27:03.939 } 00:27:03.939 } 00:27:03.939 ]' 00:27:03.939 13:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:03.939 13:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:27:03.939 13:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:03.939 13:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:03.939 13:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:03.939 13:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:27:03.939 13:46:15 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:27:03.939 13:46:15 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:27:03.939 13:46:15 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:04.198 13:46:16 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:04.198 13:46:16 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:04.198 13:46:16 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 8bc6119c-b945-4d70-89f7-e64fc4debf07 00:27:04.198 13:46:16 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=8bc6119c-b945-4d70-89f7-e64fc4debf07 00:27:04.198 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:04.198 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:27:04.198 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:27:04.198 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8bc6119c-b945-4d70-89f7-e64fc4debf07 00:27:04.457 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:04.457 { 00:27:04.457 "name": "8bc6119c-b945-4d70-89f7-e64fc4debf07", 00:27:04.457 "aliases": [ 00:27:04.457 "lvs/nvme0n1p0" 00:27:04.457 ], 00:27:04.457 "product_name": "Logical Volume", 00:27:04.457 "block_size": 4096, 00:27:04.457 "num_blocks": 26476544, 00:27:04.457 "uuid": "8bc6119c-b945-4d70-89f7-e64fc4debf07", 00:27:04.457 "assigned_rate_limits": { 00:27:04.457 "rw_ios_per_sec": 0, 00:27:04.457 "rw_mbytes_per_sec": 0, 00:27:04.457 "r_mbytes_per_sec": 0, 00:27:04.458 "w_mbytes_per_sec": 0 00:27:04.458 }, 00:27:04.458 "claimed": false, 00:27:04.458 "zoned": false, 00:27:04.458 "supported_io_types": { 00:27:04.458 "read": true, 00:27:04.458 "write": true, 00:27:04.458 "unmap": true, 00:27:04.458 "flush": false, 00:27:04.458 "reset": true, 00:27:04.458 "nvme_admin": false, 00:27:04.458 "nvme_io": false, 00:27:04.458 "nvme_io_md": false, 00:27:04.458 "write_zeroes": true, 00:27:04.458 "zcopy": false, 00:27:04.458 "get_zone_info": false, 00:27:04.458 "zone_management": false, 00:27:04.458 "zone_append": false, 00:27:04.458 "compare": false, 00:27:04.458 "compare_and_write": false, 00:27:04.458 "abort": false, 00:27:04.458 "seek_hole": true, 00:27:04.458 "seek_data": true, 00:27:04.458 "copy": false, 00:27:04.458 "nvme_iov_md": false 00:27:04.458 }, 00:27:04.458 "driver_specific": { 00:27:04.458 "lvol": { 00:27:04.458 "lvol_store_uuid": "abd89071-0832-42c2-ab85-0c8dfa3efaa4", 00:27:04.458 "base_bdev": "nvme0n1", 00:27:04.458 "thin_provision": true, 00:27:04.458 "num_allocated_clusters": 0, 00:27:04.458 "snapshot": false, 00:27:04.458 "clone": false, 00:27:04.458 "esnap_clone": false 00:27:04.458 } 00:27:04.458 } 00:27:04.458 } 00:27:04.458 ]' 00:27:04.458 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:04.458 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:27:04.458 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:04.458 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:04.458 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:04.458 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:27:04.458 13:46:16 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:27:04.458 13:46:16 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:04.716 13:46:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:27:04.716 13:46:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 8bc6119c-b945-4d70-89f7-e64fc4debf07 00:27:04.716 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=8bc6119c-b945-4d70-89f7-e64fc4debf07 00:27:04.716 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:04.716 13:46:16 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:27:04.716 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:27:04.716 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8bc6119c-b945-4d70-89f7-e64fc4debf07 00:27:04.975 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:04.975 { 00:27:04.975 "name": "8bc6119c-b945-4d70-89f7-e64fc4debf07", 00:27:04.975 "aliases": [ 00:27:04.975 "lvs/nvme0n1p0" 00:27:04.975 ], 00:27:04.975 "product_name": "Logical Volume", 00:27:04.975 "block_size": 4096, 00:27:04.975 "num_blocks": 26476544, 00:27:04.975 "uuid": "8bc6119c-b945-4d70-89f7-e64fc4debf07", 00:27:04.975 "assigned_rate_limits": { 00:27:04.975 "rw_ios_per_sec": 0, 00:27:04.975 "rw_mbytes_per_sec": 0, 00:27:04.975 "r_mbytes_per_sec": 0, 00:27:04.975 "w_mbytes_per_sec": 0 00:27:04.975 }, 00:27:04.975 "claimed": false, 00:27:04.975 "zoned": false, 00:27:04.975 "supported_io_types": { 00:27:04.975 "read": true, 00:27:04.975 "write": true, 00:27:04.975 "unmap": true, 00:27:04.975 "flush": false, 00:27:04.975 "reset": true, 00:27:04.975 "nvme_admin": false, 00:27:04.975 "nvme_io": false, 00:27:04.975 "nvme_io_md": false, 00:27:04.975 "write_zeroes": true, 00:27:04.975 "zcopy": false, 00:27:04.975 "get_zone_info": false, 00:27:04.975 "zone_management": false, 00:27:04.975 "zone_append": false, 00:27:04.975 "compare": false, 00:27:04.975 "compare_and_write": false, 00:27:04.975 "abort": false, 00:27:04.975 "seek_hole": true, 00:27:04.975 "seek_data": true, 00:27:04.975 "copy": false, 00:27:04.975 "nvme_iov_md": false 00:27:04.975 }, 00:27:04.975 "driver_specific": { 00:27:04.975 "lvol": { 00:27:04.975 "lvol_store_uuid": "abd89071-0832-42c2-ab85-0c8dfa3efaa4", 00:27:04.975 "base_bdev": "nvme0n1", 00:27:04.975 "thin_provision": true, 00:27:04.975 "num_allocated_clusters": 0, 00:27:04.975 "snapshot": false, 00:27:04.975 "clone": false, 00:27:04.975 "esnap_clone": false 00:27:04.975 } 00:27:04.975 } 00:27:04.975 } 00:27:04.975 ]' 00:27:04.975 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:04.975 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:27:04.975 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:05.235 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:05.235 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:05.235 13:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:27:05.235 13:46:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:27:05.235 13:46:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8bc6119c-b945-4d70-89f7-e64fc4debf07 -c nvc0n1p0 --l2p_dram_limit 20 00:27:05.235 [2024-11-20 13:46:17.138503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.235 [2024-11-20 13:46:17.138565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:05.235 [2024-11-20 13:46:17.138584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:05.235 [2024-11-20 13:46:17.138612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.235 [2024-11-20 13:46:17.138687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.235 [2024-11-20 13:46:17.138706] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:05.235 [2024-11-20 13:46:17.138718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:05.235 [2024-11-20 13:46:17.138731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.235 [2024-11-20 13:46:17.138753] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:05.235 [2024-11-20 13:46:17.139918] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:05.235 [2024-11-20 13:46:17.139953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.235 [2024-11-20 13:46:17.139968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:05.235 [2024-11-20 13:46:17.139980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.207 ms 00:27:05.235 [2024-11-20 13:46:17.139993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.235 [2024-11-20 13:46:17.140209] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e45d1976-a901-47ab-8de5-4e7d36f8768f 00:27:05.235 [2024-11-20 13:46:17.141667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.235 [2024-11-20 13:46:17.141703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:05.235 [2024-11-20 13:46:17.141719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:05.235 [2024-11-20 13:46:17.141735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.235 [2024-11-20 13:46:17.149415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.235 [2024-11-20 13:46:17.149448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:05.235 [2024-11-20 13:46:17.149467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.631 ms 00:27:05.235 [2024-11-20 13:46:17.149479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.235 [2024-11-20 13:46:17.149590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.235 [2024-11-20 13:46:17.149615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:05.235 [2024-11-20 13:46:17.149635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:27:05.235 [2024-11-20 13:46:17.149647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.235 [2024-11-20 13:46:17.149715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.235 [2024-11-20 13:46:17.149728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:05.235 [2024-11-20 13:46:17.149741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:05.235 [2024-11-20 13:46:17.149752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.235 [2024-11-20 13:46:17.149781] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:05.235 [2024-11-20 13:46:17.155419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.235 [2024-11-20 13:46:17.155455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:05.235 [2024-11-20 13:46:17.155468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.658 ms 00:27:05.235 [2024-11-20 13:46:17.155487] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.235 [2024-11-20 13:46:17.155522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.235 [2024-11-20 13:46:17.155536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:05.235 [2024-11-20 13:46:17.155549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:05.235 [2024-11-20 13:46:17.155561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.235 [2024-11-20 13:46:17.155621] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:05.235 [2024-11-20 13:46:17.155790] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:05.235 [2024-11-20 13:46:17.155806] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:05.235 [2024-11-20 13:46:17.155824] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:05.235 [2024-11-20 13:46:17.155839] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:05.235 [2024-11-20 13:46:17.155855] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:05.235 [2024-11-20 13:46:17.155867] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:05.235 [2024-11-20 13:46:17.155882] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:05.235 [2024-11-20 13:46:17.155892] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:05.235 [2024-11-20 13:46:17.155906] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:05.235 [2024-11-20 13:46:17.155917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.235 [2024-11-20 13:46:17.155934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:05.235 [2024-11-20 13:46:17.155945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:27:05.235 [2024-11-20 13:46:17.155959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.235 [2024-11-20 13:46:17.156036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.235 [2024-11-20 13:46:17.156053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:05.235 [2024-11-20 13:46:17.156065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:05.235 [2024-11-20 13:46:17.156081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.235 [2024-11-20 13:46:17.156169] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:05.235 [2024-11-20 13:46:17.156184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:05.235 [2024-11-20 13:46:17.156199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:05.235 [2024-11-20 13:46:17.156212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.235 [2024-11-20 13:46:17.156224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:05.235 [2024-11-20 13:46:17.156237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:05.235 [2024-11-20 13:46:17.156247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:05.235 
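The l2p region size printed above follows directly from the two figures in the layout summary: 20971520 L2P entries at a 4-byte address each is 83,886,080 bytes, i.e. the 80.00 MiB shown for the l2p region, while the --l2p_dram_limit 20 passed to bdev_ftl_create (l2p_dram_size_mb=20 in the harness) caps the DRAM-resident portion of that map at 20 MiB. As a worked check:

20971520 entries × 4 B/entry = 83,886,080 B = 80.00 MiB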
[2024-11-20 13:46:17.156261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:05.235 [2024-11-20 13:46:17.156271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:05.235 [2024-11-20 13:46:17.156284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:05.235 [2024-11-20 13:46:17.156293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:05.235 [2024-11-20 13:46:17.156306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:05.235 [2024-11-20 13:46:17.156316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:05.235 [2024-11-20 13:46:17.156343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:05.235 [2024-11-20 13:46:17.156354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:05.235 [2024-11-20 13:46:17.156369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.235 [2024-11-20 13:46:17.156379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:05.235 [2024-11-20 13:46:17.156392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:05.235 [2024-11-20 13:46:17.156403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.235 [2024-11-20 13:46:17.156418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:05.235 [2024-11-20 13:46:17.156428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:05.235 [2024-11-20 13:46:17.156441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:05.235 [2024-11-20 13:46:17.156451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:05.235 [2024-11-20 13:46:17.156464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:05.235 [2024-11-20 13:46:17.156475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:05.235 [2024-11-20 13:46:17.156491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:05.235 [2024-11-20 13:46:17.156501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:05.235 [2024-11-20 13:46:17.156513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:05.235 [2024-11-20 13:46:17.156524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:05.235 [2024-11-20 13:46:17.156536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:05.235 [2024-11-20 13:46:17.156546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:05.235 [2024-11-20 13:46:17.156562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:05.235 [2024-11-20 13:46:17.156572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:05.236 [2024-11-20 13:46:17.156585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:05.236 [2024-11-20 13:46:17.156595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:05.236 [2024-11-20 13:46:17.156608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:05.236 [2024-11-20 13:46:17.156630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:05.236 [2024-11-20 13:46:17.156643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:05.236 [2024-11-20 13:46:17.156653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:27:05.236 [2024-11-20 13:46:17.156666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.236 [2024-11-20 13:46:17.156676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:05.236 [2024-11-20 13:46:17.156689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:05.236 [2024-11-20 13:46:17.156699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.236 [2024-11-20 13:46:17.156712] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:05.236 [2024-11-20 13:46:17.156723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:05.236 [2024-11-20 13:46:17.156737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:05.236 [2024-11-20 13:46:17.156748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:05.236 [2024-11-20 13:46:17.156765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:05.236 [2024-11-20 13:46:17.156775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:05.236 [2024-11-20 13:46:17.156788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:05.236 [2024-11-20 13:46:17.156799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:05.236 [2024-11-20 13:46:17.156811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:05.236 [2024-11-20 13:46:17.156822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:05.236 [2024-11-20 13:46:17.156840] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:05.236 [2024-11-20 13:46:17.156855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:05.236 [2024-11-20 13:46:17.156883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:05.236 [2024-11-20 13:46:17.156895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:05.236 [2024-11-20 13:46:17.156911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:05.236 [2024-11-20 13:46:17.156922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:05.236 [2024-11-20 13:46:17.156936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:05.236 [2024-11-20 13:46:17.156947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:05.236 [2024-11-20 13:46:17.156960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:05.236 [2024-11-20 13:46:17.156971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:05.236 [2024-11-20 13:46:17.156988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:05.236 [2024-11-20 13:46:17.157000] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:05.236 [2024-11-20 13:46:17.157012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:05.236 [2024-11-20 13:46:17.157023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:05.236 [2024-11-20 13:46:17.157036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:05.236 [2024-11-20 13:46:17.157047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:05.236 [2024-11-20 13:46:17.157060] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:05.236 [2024-11-20 13:46:17.157072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:05.236 [2024-11-20 13:46:17.157088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:05.236 [2024-11-20 13:46:17.157099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:05.236 [2024-11-20 13:46:17.157113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:05.236 [2024-11-20 13:46:17.157123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:05.236 [2024-11-20 13:46:17.157138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.236 [2024-11-20 13:46:17.157152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:05.236 [2024-11-20 13:46:17.157166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:27:05.236 [2024-11-20 13:46:17.157176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.236 [2024-11-20 13:46:17.157220] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
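The superblock dump above reports each region as hex block offsets and sizes, while the earlier dump_region lines report MiB; the two agree if one assumes a 4 KiB FTL block. A quick cross-check with a hypothetical helper (to_mib is not part of the repo):

to_mib() { echo "scale=2; $(( $1 )) * 4096 / 1048576" | bc; }   # blocks -> MiB at 4 KiB per block
to_mib 0x5020   # 80.12 -> matches the band_md offset above (Region type:0x3 blk_offs:0x5020)
to_mib 0x80     # .50   -> matches the band_md size above (blocks: 0.50 MiB)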
00:27:05.236 [2024-11-20 13:46:17.157234] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:09.422 [2024-11-20 13:46:20.967664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.422 [2024-11-20 13:46:20.967739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:09.422 [2024-11-20 13:46:20.967767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3816.622 ms 00:27:09.422 [2024-11-20 13:46:20.967778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.422 [2024-11-20 13:46:21.007501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.422 [2024-11-20 13:46:21.007563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:09.422 [2024-11-20 13:46:21.007585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.442 ms 00:27:09.422 [2024-11-20 13:46:21.007606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.422 [2024-11-20 13:46:21.007782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.422 [2024-11-20 13:46:21.007802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:09.422 [2024-11-20 13:46:21.007830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:09.422 [2024-11-20 13:46:21.007841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.422 [2024-11-20 13:46:21.065004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.422 [2024-11-20 13:46:21.065066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:09.422 [2024-11-20 13:46:21.065105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.201 ms 00:27:09.422 [2024-11-20 13:46:21.065117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.422 [2024-11-20 13:46:21.065177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.422 [2024-11-20 13:46:21.065194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:09.422 [2024-11-20 13:46:21.065208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:09.422 [2024-11-20 13:46:21.065219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.422 [2024-11-20 13:46:21.065796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.422 [2024-11-20 13:46:21.065814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:09.422 [2024-11-20 13:46:21.065829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:27:09.422 [2024-11-20 13:46:21.065840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.422 [2024-11-20 13:46:21.065972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.422 [2024-11-20 13:46:21.065986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:09.422 [2024-11-20 13:46:21.066002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:27:09.422 [2024-11-20 13:46:21.066012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.422 [2024-11-20 13:46:21.086058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.422 [2024-11-20 13:46:21.086115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:09.422 [2024-11-20 
13:46:21.086153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.054 ms 00:27:09.422 [2024-11-20 13:46:21.086164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.422 [2024-11-20 13:46:21.099633] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:27:09.422 [2024-11-20 13:46:21.105709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.422 [2024-11-20 13:46:21.105760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:09.422 [2024-11-20 13:46:21.105778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.431 ms 00:27:09.422 [2024-11-20 13:46:21.105791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.422 [2024-11-20 13:46:21.200159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.422 [2024-11-20 13:46:21.200258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:09.422 [2024-11-20 13:46:21.200278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.467 ms 00:27:09.423 [2024-11-20 13:46:21.200292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.423 [2024-11-20 13:46:21.200496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.423 [2024-11-20 13:46:21.200516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:09.423 [2024-11-20 13:46:21.200528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:27:09.423 [2024-11-20 13:46:21.200541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.423 [2024-11-20 13:46:21.239089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.423 [2024-11-20 13:46:21.239167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:09.423 [2024-11-20 13:46:21.239187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.528 ms 00:27:09.423 [2024-11-20 13:46:21.239202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.423 [2024-11-20 13:46:21.278263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.423 [2024-11-20 13:46:21.278326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:09.423 [2024-11-20 13:46:21.278347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.065 ms 00:27:09.423 [2024-11-20 13:46:21.278361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.423 [2024-11-20 13:46:21.279146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.423 [2024-11-20 13:46:21.279177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:09.423 [2024-11-20 13:46:21.279190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:27:09.423 [2024-11-20 13:46:21.279205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-20 13:46:21.387821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.683 [2024-11-20 13:46:21.387899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:09.683 [2024-11-20 13:46:21.387918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.687 ms 00:27:09.683 [2024-11-20 13:46:21.387932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-20 
13:46:21.428124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.683 [2024-11-20 13:46:21.428194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:09.683 [2024-11-20 13:46:21.428218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.120 ms 00:27:09.683 [2024-11-20 13:46:21.428232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-20 13:46:21.469546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.683 [2024-11-20 13:46:21.469630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:09.683 [2024-11-20 13:46:21.469651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.318 ms 00:27:09.683 [2024-11-20 13:46:21.469665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-20 13:46:21.507858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.683 [2024-11-20 13:46:21.507943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:09.683 [2024-11-20 13:46:21.507963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.197 ms 00:27:09.683 [2024-11-20 13:46:21.507977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-20 13:46:21.508035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.683 [2024-11-20 13:46:21.508054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:09.683 [2024-11-20 13:46:21.508066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:09.683 [2024-11-20 13:46:21.508079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-20 13:46:21.508206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.683 [2024-11-20 13:46:21.508222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:09.683 [2024-11-20 13:46:21.508233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:09.683 [2024-11-20 13:46:21.508246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.683 [2024-11-20 13:46:21.509371] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4377.437 ms, result 0 00:27:09.683 { 00:27:09.683 "name": "ftl0", 00:27:09.683 "uuid": "e45d1976-a901-47ab-8de5-4e7d36f8768f" 00:27:09.683 } 00:27:09.683 13:46:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:27:09.683 13:46:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:27:09.683 13:46:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:27:09.942 13:46:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:27:09.942 [2024-11-20 13:46:21.853238] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:09.942 I/O size of 69632 is greater than zero copy threshold (65536). 00:27:09.942 Zero copy mechanism will not be used. 00:27:09.942 Running I/O for 4 seconds... 
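The xtrace lines above form one readiness check: the stats RPC's JSON is piped through jq and grep to confirm the new bdev registered under the expected name. A sketch of the same pipeline, assuming the paths shown in the log:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0   # -q: exit status only, -w: whole-word match
echo $?   # 0 once ftl0 is up

The zero-copy notice that follows is expected for this job: the 69632-byte (68 KiB) I/O size exceeds the 65536-byte threshold the notice itself quotes, so bdevperf falls back to regular buffers.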
00:27:11.903 1581.00 IOPS, 104.99 MiB/s [2024-11-20T13:46:25.236Z] 1660.00 IOPS, 110.23 MiB/s [2024-11-20T13:46:26.170Z] 1645.67 IOPS, 109.28 MiB/s [2024-11-20T13:46:26.170Z] 1677.00 IOPS, 111.36 MiB/s 00:27:14.213 Latency(us) 00:27:14.213 [2024-11-20T13:46:26.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.213 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:27:14.213 ftl0 : 4.00 1676.26 111.31 0.00 0.00 623.96 198.22 6527.28 00:27:14.213 [2024-11-20T13:46:26.170Z] =================================================================================================================== 00:27:14.213 [2024-11-20T13:46:26.170Z] Total : 1676.26 111.31 0.00 0.00 623.96 198.22 6527.28 00:27:14.213 [2024-11-20 13:46:25.859003] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:14.213 { 00:27:14.213 "results": [ 00:27:14.213 { 00:27:14.213 "job": "ftl0", 00:27:14.213 "core_mask": "0x1", 00:27:14.213 "workload": "randwrite", 00:27:14.213 "status": "finished", 00:27:14.213 "queue_depth": 1, 00:27:14.213 "io_size": 69632, 00:27:14.213 "runtime": 4.002357, 00:27:14.213 "iops": 1676.2622624618443, 00:27:14.213 "mibps": 111.31429086660685, 00:27:14.213 "io_failed": 0, 00:27:14.213 "io_timeout": 0, 00:27:14.213 "avg_latency_us": 623.9612465662321, 00:27:14.213 "min_latency_us": 198.22008032128514, 00:27:14.213 "max_latency_us": 6527.28032128514 00:27:14.213 } 00:27:14.213 ], 00:27:14.213 "core_count": 1 00:27:14.213 } 00:27:14.213 13:46:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:27:14.213 [2024-11-20 13:46:25.998905] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:14.213 Running I/O for 4 seconds... 
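The JSON block above is enough to sanity-check the summary table: MiB/s is IOPS times the 69632-byte I/O size over 2^20. For example:

echo "scale=2; 1676.2622624618443 * 69632 / 1048576" | bc   # -> 111.31, matching "mibps" above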
00:27:16.122 9722.00 IOPS, 37.98 MiB/s [2024-11-20T13:46:29.014Z] 9648.00 IOPS, 37.69 MiB/s [2024-11-20T13:46:30.389Z] 9400.33 IOPS, 36.72 MiB/s [2024-11-20T13:46:30.389Z] 9452.50 IOPS, 36.92 MiB/s 00:27:18.432 Latency(us) 00:27:18.432 [2024-11-20T13:46:30.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.432 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:27:18.432 ftl0 : 4.02 9445.02 36.89 0.00 0.00 13523.67 266.49 33899.75 00:27:18.432 [2024-11-20T13:46:30.389Z] =================================================================================================================== 00:27:18.432 [2024-11-20T13:46:30.389Z] Total : 9445.02 36.89 0.00 0.00 13523.67 0.00 33899.75 00:27:18.432 { 00:27:18.432 "results": [ 00:27:18.432 { 00:27:18.432 "job": "ftl0", 00:27:18.432 "core_mask": "0x1", 00:27:18.432 "workload": "randwrite", 00:27:18.432 "status": "finished", 00:27:18.432 "queue_depth": 128, 00:27:18.432 "io_size": 4096, 00:27:18.432 "runtime": 4.016614, 00:27:18.432 "iops": 9445.020108977362, 00:27:18.432 "mibps": 36.89460980069282, 00:27:18.432 "io_failed": 0, 00:27:18.432 "io_timeout": 0, 00:27:18.432 "avg_latency_us": 13523.665125557454, 00:27:18.432 "min_latency_us": 266.4867469879518, 00:27:18.432 "max_latency_us": 33899.74618473896 00:27:18.432 } 00:27:18.432 ], 00:27:18.432 "core_count": 1 00:27:18.432 } 00:27:18.432 [2024-11-20 13:46:30.021103] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:18.432 13:46:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:27:18.432 [2024-11-20 13:46:30.148410] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:18.432 Running I/O for 4 seconds... 
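For the queue-depth-128 run above, Little's law ties the reported figures together: average latency is roughly queue_depth / IOPS. A sketch (multiplying before dividing so bc's scale does not truncate the intermediate result):

echo "scale=2; 128 * 1000 / 9445.020108977362" | bc   # 13.55 ms, close to the reported 13523.67 us average

The small gap from 13.52 ms is expected, since the queue is not fully occupied during ramp-up and teardown.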
00:27:20.312 8302.00 IOPS, 32.43 MiB/s [2024-11-20T13:46:33.205Z] 8438.50 IOPS, 32.96 MiB/s [2024-11-20T13:46:34.580Z] 8469.33 IOPS, 33.08 MiB/s [2024-11-20T13:46:34.580Z] 8452.00 IOPS, 33.02 MiB/s 00:27:22.623 Latency(us) 00:27:22.623 [2024-11-20T13:46:34.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.623 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:22.623 Verification LBA range: start 0x0 length 0x1400000 00:27:22.623 ftl0 : 4.01 8461.82 33.05 0.00 0.00 15078.92 276.36 29899.16 00:27:22.623 [2024-11-20T13:46:34.580Z] =================================================================================================================== 00:27:22.623 [2024-11-20T13:46:34.580Z] Total : 8461.82 33.05 0.00 0.00 15078.92 0.00 29899.16 00:27:22.623 [2024-11-20 13:46:34.172077] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:22.623 { 00:27:22.623 "results": [ 00:27:22.623 { 00:27:22.623 "job": "ftl0", 00:27:22.623 "core_mask": "0x1", 00:27:22.623 "workload": "verify", 00:27:22.623 "status": "finished", 00:27:22.623 "verify_range": { 00:27:22.623 "start": 0, 00:27:22.623 "length": 20971520 00:27:22.623 }, 00:27:22.623 "queue_depth": 128, 00:27:22.623 "io_size": 4096, 00:27:22.623 "runtime": 4.010487, 00:27:22.623 "iops": 8461.815235905266, 00:27:22.623 "mibps": 33.053965765254944, 00:27:22.623 "io_failed": 0, 00:27:22.623 "io_timeout": 0, 00:27:22.623 "avg_latency_us": 15078.915268050041, 00:27:22.623 "min_latency_us": 276.3566265060241, 00:27:22.623 "max_latency_us": 29899.15502008032 00:27:22.623 } 00:27:22.623 ], 00:27:22.623 "core_count": 1 00:27:22.623 } 00:27:22.623 13:46:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:27:22.623 [2024-11-20 13:46:34.391276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.623 [2024-11-20 13:46:34.391352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:22.623 [2024-11-20 13:46:34.391370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:22.623 [2024-11-20 13:46:34.391384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.623 [2024-11-20 13:46:34.391410] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:22.623 [2024-11-20 13:46:34.395528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.623 [2024-11-20 13:46:34.395562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:22.623 [2024-11-20 13:46:34.395579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.102 ms 00:27:22.623 [2024-11-20 13:46:34.395590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.623 [2024-11-20 13:46:34.397345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.623 [2024-11-20 13:46:34.397516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:22.623 [2024-11-20 13:46:34.397548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:27:22.623 [2024-11-20 13:46:34.397567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.883 [2024-11-20 13:46:34.608282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.883 [2024-11-20 13:46:34.608552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:27:22.883 [2024-11-20 13:46:34.608614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 211.010 ms 00:27:22.883 [2024-11-20 13:46:34.608630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.883 [2024-11-20 13:46:34.614024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.883 [2024-11-20 13:46:34.614061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:22.883 [2024-11-20 13:46:34.614078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.345 ms 00:27:22.883 [2024-11-20 13:46:34.614088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.883 [2024-11-20 13:46:34.652028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.883 [2024-11-20 13:46:34.652085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:22.883 [2024-11-20 13:46:34.652107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.916 ms 00:27:22.883 [2024-11-20 13:46:34.652119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.883 [2024-11-20 13:46:34.676578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.883 [2024-11-20 13:46:34.676660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:22.883 [2024-11-20 13:46:34.676682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.439 ms 00:27:22.883 [2024-11-20 13:46:34.676694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.883 [2024-11-20 13:46:34.676909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.883 [2024-11-20 13:46:34.676925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:22.883 [2024-11-20 13:46:34.676944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:27:22.883 [2024-11-20 13:46:34.676955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.883 [2024-11-20 13:46:34.717195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.883 [2024-11-20 13:46:34.717452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:22.883 [2024-11-20 13:46:34.717572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.276 ms 00:27:22.883 [2024-11-20 13:46:34.717638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.883 [2024-11-20 13:46:34.754661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.883 [2024-11-20 13:46:34.754881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:22.883 [2024-11-20 13:46:34.755048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.986 ms 00:27:22.883 [2024-11-20 13:46:34.755086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.883 [2024-11-20 13:46:34.792785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.883 [2024-11-20 13:46:34.793072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:22.883 [2024-11-20 13:46:34.793199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.675 ms 00:27:22.883 [2024-11-20 13:46:34.793219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.883 [2024-11-20 13:46:34.831839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.883 [2024-11-20 13:46:34.832132] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:22.883 [2024-11-20 13:46:34.832191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.501 ms 00:27:22.883 [2024-11-20 13:46:34.832205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.883 [2024-11-20 13:46:34.832282] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:22.883 [2024-11-20 13:46:34.832305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:22.883 [2024-11-20 13:46:34.832324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:22.883 [2024-11-20 13:46:34.832337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:22.883 [2024-11-20 13:46:34.832353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:22.883 [2024-11-20 13:46:34.832365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:22.883 [2024-11-20 13:46:34.832381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:22.883 [2024-11-20 13:46:34.832394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:22.883 [2024-11-20 13:46:34.832410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:22.883 [2024-11-20 13:46:34.832423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:22.883 [2024-11-20 13:46:34.832439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:22.883 [2024-11-20 13:46:34.832452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:22.883 [2024-11-20 13:46:34.832468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:22.883 [2024-11-20 13:46:34.832481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:22.883 [2024-11-20 13:46:34.832499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:22.883 [2024-11-20 13:46:34.832512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:27:22.884 [2024-11-20 13:46:34.832647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.832992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833779] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:22.884 [2024-11-20 13:46:34.833848] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:22.884 [2024-11-20 13:46:34.833864] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e45d1976-a901-47ab-8de5-4e7d36f8768f 00:27:22.884 [2024-11-20 13:46:34.833877] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:22.884 [2024-11-20 13:46:34.833896] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:22.884 [2024-11-20 13:46:34.833909] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:22.885 [2024-11-20 13:46:34.833924] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:22.885 [2024-11-20 13:46:34.833935] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:22.885 [2024-11-20 13:46:34.833951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:22.885 [2024-11-20 13:46:34.833964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:22.885 [2024-11-20 13:46:34.833982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:22.885 [2024-11-20 13:46:34.833993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:22.885 [2024-11-20 13:46:34.834008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.885 [2024-11-20 13:46:34.834022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:22.885 [2024-11-20 13:46:34.834039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms 00:27:22.885 [2024-11-20 13:46:34.834051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.143 [2024-11-20 13:46:34.855552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.143 [2024-11-20 13:46:34.855811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:23.143 [2024-11-20 13:46:34.855857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.447 ms 00:27:23.143 [2024-11-20 13:46:34.855871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.143 [2024-11-20 13:46:34.856427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.143 [2024-11-20 13:46:34.856443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:23.143 [2024-11-20 13:46:34.856458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:27:23.143 [2024-11-20 13:46:34.856468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.143 [2024-11-20 13:46:34.913668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.143 [2024-11-20 13:46:34.913956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:23.143 [2024-11-20 13:46:34.913993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.143 [2024-11-20 13:46:34.914006] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:23.143 [2024-11-20 13:46:34.914098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.143 [2024-11-20 13:46:34.914110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:23.143 [2024-11-20 13:46:34.914124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.143 [2024-11-20 13:46:34.914135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.143 [2024-11-20 13:46:34.914288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.143 [2024-11-20 13:46:34.914304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:23.143 [2024-11-20 13:46:34.914319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.143 [2024-11-20 13:46:34.914330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.143 [2024-11-20 13:46:34.914353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.143 [2024-11-20 13:46:34.914365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:23.143 [2024-11-20 13:46:34.914380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.143 [2024-11-20 13:46:34.914391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.143 [2024-11-20 13:46:35.044891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.143 [2024-11-20 13:46:35.045031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:23.143 [2024-11-20 13:46:35.045058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.143 [2024-11-20 13:46:35.045069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.402 [2024-11-20 13:46:35.153263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.402 [2024-11-20 13:46:35.153343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:23.402 [2024-11-20 13:46:35.153370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.402 [2024-11-20 13:46:35.153381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.402 [2024-11-20 13:46:35.153621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.402 [2024-11-20 13:46:35.153643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:23.402 [2024-11-20 13:46:35.153658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.402 [2024-11-20 13:46:35.153668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.402 [2024-11-20 13:46:35.153733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.402 [2024-11-20 13:46:35.153745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:23.402 [2024-11-20 13:46:35.153759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.402 [2024-11-20 13:46:35.153769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.402 [2024-11-20 13:46:35.153896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.402 [2024-11-20 13:46:35.153910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:23.402 [2024-11-20 13:46:35.153930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:27:23.402 [2024-11-20 13:46:35.153940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.402 [2024-11-20 13:46:35.153982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.402 [2024-11-20 13:46:35.153994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:23.402 [2024-11-20 13:46:35.154007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.402 [2024-11-20 13:46:35.154017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.402 [2024-11-20 13:46:35.154082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.402 [2024-11-20 13:46:35.154095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:23.402 [2024-11-20 13:46:35.154116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.402 [2024-11-20 13:46:35.154126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.402 [2024-11-20 13:46:35.154247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:23.402 [2024-11-20 13:46:35.154272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:23.402 [2024-11-20 13:46:35.154287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:23.402 [2024-11-20 13:46:35.154297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.402 [2024-11-20 13:46:35.154509] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 764.400 ms, result 0 00:27:23.402 true 00:27:23.402 13:46:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78106 00:27:23.402 13:46:35 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78106 ']' 00:27:23.402 13:46:35 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78106 00:27:23.402 13:46:35 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:23.402 13:46:35 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:23.402 13:46:35 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78106 00:27:23.402 killing process with pid 78106 00:27:23.402 Received shutdown signal, test time was about 4.000000 seconds 00:27:23.402 00:27:23.402 Latency(us) 00:27:23.402 [2024-11-20T13:46:35.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:23.402 [2024-11-20T13:46:35.359Z] =================================================================================================================== 00:27:23.402 [2024-11-20T13:46:35.359Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:23.402 13:46:35 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:23.402 13:46:35 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:23.402 13:46:35 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78106' 00:27:23.402 13:46:35 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78106 00:27:23.402 13:46:35 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78106 00:27:27.639 Remove shared memory files 00:27:27.639 13:46:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:27.639 13:46:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:27:27.639 13:46:39 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:27.639 13:46:39 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:27:27.639 13:46:39 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:27:27.639 13:46:39 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:27:27.639 13:46:39 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:27.639 13:46:39 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:27:27.639 ************************************ 00:27:27.639 END TEST ftl_bdevperf 00:27:27.639 ************************************ 00:27:27.639 00:27:27.639 real 0m26.411s 00:27:27.639 user 0m29.084s 00:27:27.639 sys 0m1.355s 00:27:27.639 13:46:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:27.639 13:46:39 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:27.639 13:46:39 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:27:27.639 13:46:39 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:27.639 13:46:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:27.639 13:46:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:27.639 ************************************ 00:27:27.639 START TEST ftl_trim 00:27:27.639 ************************************ 00:27:27.639 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:27:27.639 * Looking for test storage... 00:27:27.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:27.640 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:27.640 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:27:27.640 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:27.640 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:27.640 13:46:39 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:27:27.640 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:27.640 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:27.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.640 --rc genhtml_branch_coverage=1 00:27:27.640 --rc genhtml_function_coverage=1 00:27:27.640 --rc genhtml_legend=1 00:27:27.640 --rc geninfo_all_blocks=1 00:27:27.640 --rc geninfo_unexecuted_blocks=1 00:27:27.640 00:27:27.640 ' 00:27:27.640 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:27.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.640 --rc genhtml_branch_coverage=1 00:27:27.640 --rc genhtml_function_coverage=1 00:27:27.640 --rc genhtml_legend=1 00:27:27.640 --rc geninfo_all_blocks=1 00:27:27.640 --rc geninfo_unexecuted_blocks=1 00:27:27.640 00:27:27.640 ' 00:27:27.640 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:27.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.640 --rc genhtml_branch_coverage=1 00:27:27.640 --rc genhtml_function_coverage=1 00:27:27.640 --rc genhtml_legend=1 00:27:27.640 --rc geninfo_all_blocks=1 00:27:27.640 --rc geninfo_unexecuted_blocks=1 00:27:27.640 00:27:27.640 ' 00:27:27.640 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:27.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.640 --rc genhtml_branch_coverage=1 00:27:27.640 --rc genhtml_function_coverage=1 00:27:27.640 --rc genhtml_legend=1 00:27:27.640 --rc geninfo_all_blocks=1 00:27:27.640 --rc geninfo_unexecuted_blocks=1 00:27:27.640 00:27:27.640 ' 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
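The trim.sh prologue above resolves its directories from the script's own location: testdir from dirname of the script, rootdir two levels up. A minimal sketch of the idiom, using the paths exactly as they appear in the xtrace:

testdir=$(readlink -f "$(dirname "$0")")    # /home/vagrant/spdk_repo/spdk/test/ftl
rootdir=$(readlink -f "$testdir/../..")     # /home/vagrant/spdk_repo/spdk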
00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:27.640 13:46:39 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78471 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:27:27.640 13:46:39 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78471 00:27:27.640 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78471 ']' 00:27:27.640 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.640 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:27.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.640 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.640 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:27.640 13:46:39 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:27.899 [2024-11-20 13:46:39.601745] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:27:27.899 [2024-11-20 13:46:39.602471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78471 ] 00:27:27.899 [2024-11-20 13:46:39.786530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:28.158 [2024-11-20 13:46:39.905415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.158 [2024-11-20 13:46:39.905557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.158 [2024-11-20 13:46:39.905637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.116 13:46:40 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.116 13:46:40 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:27:29.116 13:46:40 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:29.116 13:46:40 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:27:29.116 13:46:40 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:29.116 13:46:40 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:27:29.116 13:46:40 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:27:29.116 13:46:40 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:29.374 13:46:41 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:29.374 13:46:41 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:27:29.374 13:46:41 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:29.374 13:46:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:29.374 13:46:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:29.374 13:46:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:29.374 13:46:41 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:29.374 13:46:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:29.632 13:46:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:29.632 { 00:27:29.632 "name": "nvme0n1", 00:27:29.632 "aliases": [ 
00:27:29.632 "2263a45a-a974-495e-9306-d142f366c4e8" 00:27:29.632 ], 00:27:29.632 "product_name": "NVMe disk", 00:27:29.632 "block_size": 4096, 00:27:29.632 "num_blocks": 1310720, 00:27:29.632 "uuid": "2263a45a-a974-495e-9306-d142f366c4e8", 00:27:29.632 "numa_id": -1, 00:27:29.632 "assigned_rate_limits": { 00:27:29.632 "rw_ios_per_sec": 0, 00:27:29.632 "rw_mbytes_per_sec": 0, 00:27:29.632 "r_mbytes_per_sec": 0, 00:27:29.632 "w_mbytes_per_sec": 0 00:27:29.632 }, 00:27:29.632 "claimed": true, 00:27:29.632 "claim_type": "read_many_write_one", 00:27:29.632 "zoned": false, 00:27:29.632 "supported_io_types": { 00:27:29.632 "read": true, 00:27:29.632 "write": true, 00:27:29.632 "unmap": true, 00:27:29.632 "flush": true, 00:27:29.632 "reset": true, 00:27:29.632 "nvme_admin": true, 00:27:29.632 "nvme_io": true, 00:27:29.632 "nvme_io_md": false, 00:27:29.632 "write_zeroes": true, 00:27:29.632 "zcopy": false, 00:27:29.632 "get_zone_info": false, 00:27:29.632 "zone_management": false, 00:27:29.632 "zone_append": false, 00:27:29.632 "compare": true, 00:27:29.632 "compare_and_write": false, 00:27:29.632 "abort": true, 00:27:29.632 "seek_hole": false, 00:27:29.632 "seek_data": false, 00:27:29.632 "copy": true, 00:27:29.632 "nvme_iov_md": false 00:27:29.632 }, 00:27:29.632 "driver_specific": { 00:27:29.632 "nvme": [ 00:27:29.632 { 00:27:29.632 "pci_address": "0000:00:11.0", 00:27:29.632 "trid": { 00:27:29.632 "trtype": "PCIe", 00:27:29.632 "traddr": "0000:00:11.0" 00:27:29.632 }, 00:27:29.632 "ctrlr_data": { 00:27:29.632 "cntlid": 0, 00:27:29.632 "vendor_id": "0x1b36", 00:27:29.632 "model_number": "QEMU NVMe Ctrl", 00:27:29.632 "serial_number": "12341", 00:27:29.632 "firmware_revision": "8.0.0", 00:27:29.632 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:29.632 "oacs": { 00:27:29.632 "security": 0, 00:27:29.632 "format": 1, 00:27:29.632 "firmware": 0, 00:27:29.632 "ns_manage": 1 00:27:29.632 }, 00:27:29.632 "multi_ctrlr": false, 00:27:29.632 "ana_reporting": false 00:27:29.632 }, 00:27:29.632 "vs": { 00:27:29.632 "nvme_version": "1.4" 00:27:29.632 }, 00:27:29.632 "ns_data": { 00:27:29.632 "id": 1, 00:27:29.632 "can_share": false 00:27:29.632 } 00:27:29.632 } 00:27:29.632 ], 00:27:29.632 "mp_policy": "active_passive" 00:27:29.632 } 00:27:29.632 } 00:27:29.632 ]' 00:27:29.632 13:46:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:29.632 13:46:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:29.632 13:46:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:29.632 13:46:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:29.632 13:46:41 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:29.632 13:46:41 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:27:29.632 13:46:41 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:27:29.632 13:46:41 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:29.632 13:46:41 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:27:29.632 13:46:41 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:29.632 13:46:41 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:29.891 13:46:41 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=abd89071-0832-42c2-ab85-0c8dfa3efaa4 00:27:29.891 13:46:41 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:27:29.891 13:46:41 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u abd89071-0832-42c2-ab85-0c8dfa3efaa4 00:27:30.149 13:46:41 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:30.408 13:46:42 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=91fb88d4-9bc4-48f0-b56f-94039f7e469e 00:27:30.409 13:46:42 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 91fb88d4-9bc4-48f0-b56f-94039f7e469e 00:27:30.667 13:46:42 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=b690a79e-12f9-4856-b295-4b71426d0631 00:27:30.667 13:46:42 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b690a79e-12f9-4856-b295-4b71426d0631 00:27:30.667 13:46:42 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:27:30.667 13:46:42 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:30.667 13:46:42 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=b690a79e-12f9-4856-b295-4b71426d0631 00:27:30.667 13:46:42 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:27:30.667 13:46:42 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size b690a79e-12f9-4856-b295-4b71426d0631 00:27:30.667 13:46:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b690a79e-12f9-4856-b295-4b71426d0631 00:27:30.667 13:46:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:30.667 13:46:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:30.667 13:46:42 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:30.667 13:46:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b690a79e-12f9-4856-b295-4b71426d0631 00:27:30.925 13:46:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:30.925 { 00:27:30.925 "name": "b690a79e-12f9-4856-b295-4b71426d0631", 00:27:30.925 "aliases": [ 00:27:30.925 "lvs/nvme0n1p0" 00:27:30.925 ], 00:27:30.925 "product_name": "Logical Volume", 00:27:30.925 "block_size": 4096, 00:27:30.925 "num_blocks": 26476544, 00:27:30.925 "uuid": "b690a79e-12f9-4856-b295-4b71426d0631", 00:27:30.925 "assigned_rate_limits": { 00:27:30.925 "rw_ios_per_sec": 0, 00:27:30.925 "rw_mbytes_per_sec": 0, 00:27:30.925 "r_mbytes_per_sec": 0, 00:27:30.925 "w_mbytes_per_sec": 0 00:27:30.925 }, 00:27:30.925 "claimed": false, 00:27:30.925 "zoned": false, 00:27:30.925 "supported_io_types": { 00:27:30.925 "read": true, 00:27:30.925 "write": true, 00:27:30.925 "unmap": true, 00:27:30.925 "flush": false, 00:27:30.925 "reset": true, 00:27:30.925 "nvme_admin": false, 00:27:30.925 "nvme_io": false, 00:27:30.925 "nvme_io_md": false, 00:27:30.925 "write_zeroes": true, 00:27:30.925 "zcopy": false, 00:27:30.925 "get_zone_info": false, 00:27:30.925 "zone_management": false, 00:27:30.925 "zone_append": false, 00:27:30.925 "compare": false, 00:27:30.925 "compare_and_write": false, 00:27:30.925 "abort": false, 00:27:30.925 "seek_hole": true, 00:27:30.925 "seek_data": true, 00:27:30.925 "copy": false, 00:27:30.925 "nvme_iov_md": false 00:27:30.925 }, 00:27:30.925 "driver_specific": { 00:27:30.925 "lvol": { 00:27:30.925 "lvol_store_uuid": "91fb88d4-9bc4-48f0-b56f-94039f7e469e", 00:27:30.925 "base_bdev": "nvme0n1", 00:27:30.925 "thin_provision": true, 00:27:30.925 "num_allocated_clusters": 0, 00:27:30.925 "snapshot": false, 00:27:30.925 "clone": false, 00:27:30.925 "esnap_clone": false 00:27:30.925 } 00:27:30.925 } 00:27:30.925 } 00:27:30.925 ]' 00:27:30.925 13:46:42 ftl.ftl_trim -- 
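The -t passed to bdev_lvol_create is the detail that makes this work: the 103424 MiB volume is thin-provisioned on the 5120 MiB base disk, so clusters are allocated only on first write ("num_allocated_clusters": 0 in the dump that follows). Reduced to the two RPCs from the trace (lvstore UUID copied from this run; -u selects the lvstore by UUID):

scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t \
    -u 91fb88d4-9bc4-48f0-b56f-94039f7e469e    # -t = thin provisioning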
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:30.925 13:46:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:30.925 13:46:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:30.925 13:46:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:30.925 13:46:42 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:30.925 13:46:42 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:30.925 13:46:42 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:27:30.925 13:46:42 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:27:30.925 13:46:42 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:31.493 13:46:43 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:31.493 13:46:43 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:31.493 13:46:43 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size b690a79e-12f9-4856-b295-4b71426d0631 00:27:31.493 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b690a79e-12f9-4856-b295-4b71426d0631 00:27:31.493 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:31.493 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:31.493 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:31.493 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b690a79e-12f9-4856-b295-4b71426d0631 00:27:31.752 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:31.752 { 00:27:31.752 "name": "b690a79e-12f9-4856-b295-4b71426d0631", 00:27:31.752 "aliases": [ 00:27:31.752 "lvs/nvme0n1p0" 00:27:31.752 ], 00:27:31.752 "product_name": "Logical Volume", 00:27:31.752 "block_size": 4096, 00:27:31.752 "num_blocks": 26476544, 00:27:31.752 "uuid": "b690a79e-12f9-4856-b295-4b71426d0631", 00:27:31.752 "assigned_rate_limits": { 00:27:31.752 "rw_ios_per_sec": 0, 00:27:31.752 "rw_mbytes_per_sec": 0, 00:27:31.752 "r_mbytes_per_sec": 0, 00:27:31.752 "w_mbytes_per_sec": 0 00:27:31.752 }, 00:27:31.752 "claimed": false, 00:27:31.752 "zoned": false, 00:27:31.752 "supported_io_types": { 00:27:31.752 "read": true, 00:27:31.752 "write": true, 00:27:31.752 "unmap": true, 00:27:31.752 "flush": false, 00:27:31.752 "reset": true, 00:27:31.752 "nvme_admin": false, 00:27:31.752 "nvme_io": false, 00:27:31.752 "nvme_io_md": false, 00:27:31.752 "write_zeroes": true, 00:27:31.752 "zcopy": false, 00:27:31.752 "get_zone_info": false, 00:27:31.752 "zone_management": false, 00:27:31.752 "zone_append": false, 00:27:31.752 "compare": false, 00:27:31.752 "compare_and_write": false, 00:27:31.752 "abort": false, 00:27:31.752 "seek_hole": true, 00:27:31.752 "seek_data": true, 00:27:31.752 "copy": false, 00:27:31.752 "nvme_iov_md": false 00:27:31.752 }, 00:27:31.752 "driver_specific": { 00:27:31.752 "lvol": { 00:27:31.752 "lvol_store_uuid": "91fb88d4-9bc4-48f0-b56f-94039f7e469e", 00:27:31.752 "base_bdev": "nvme0n1", 00:27:31.752 "thin_provision": true, 00:27:31.752 "num_allocated_clusters": 0, 00:27:31.752 "snapshot": false, 00:27:31.752 "clone": false, 00:27:31.752 "esnap_clone": false 00:27:31.752 } 00:27:31.752 } 00:27:31.752 } 00:27:31.752 ]' 00:27:31.752 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:31.752 13:46:43 ftl.ftl_trim -- 
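base_size=5171 above is the NV cache sizing, consistent with a 5% slice of the 103424 MiB data volume, and the split carved just below (bdev_split_create nvc0n1 -s 5171 1) turns it into the single partition nvc0n1p0 that FTL later reports as its write buffer cache. A one-line check of that arithmetic (an interpretation of the trace, not common.sh verbatim):

echo $((103424 * 5 / 100))    # -> 5171, the MiB size handed to bdev_split_create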
common/autotest_common.sh@1387 -- # bs=4096 00:27:31.752 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:31.752 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:31.752 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:31.752 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:31.752 13:46:43 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:27:31.752 13:46:43 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:32.011 13:46:43 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:27:32.011 13:46:43 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:27:32.011 13:46:43 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size b690a79e-12f9-4856-b295-4b71426d0631 00:27:32.011 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b690a79e-12f9-4856-b295-4b71426d0631 00:27:32.011 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:32.011 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:32.011 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:32.011 13:46:43 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b690a79e-12f9-4856-b295-4b71426d0631 00:27:32.269 13:46:44 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:32.269 { 00:27:32.269 "name": "b690a79e-12f9-4856-b295-4b71426d0631", 00:27:32.269 "aliases": [ 00:27:32.269 "lvs/nvme0n1p0" 00:27:32.269 ], 00:27:32.269 "product_name": "Logical Volume", 00:27:32.269 "block_size": 4096, 00:27:32.269 "num_blocks": 26476544, 00:27:32.269 "uuid": "b690a79e-12f9-4856-b295-4b71426d0631", 00:27:32.269 "assigned_rate_limits": { 00:27:32.269 "rw_ios_per_sec": 0, 00:27:32.269 "rw_mbytes_per_sec": 0, 00:27:32.269 "r_mbytes_per_sec": 0, 00:27:32.269 "w_mbytes_per_sec": 0 00:27:32.269 }, 00:27:32.269 "claimed": false, 00:27:32.269 "zoned": false, 00:27:32.269 "supported_io_types": { 00:27:32.269 "read": true, 00:27:32.269 "write": true, 00:27:32.269 "unmap": true, 00:27:32.270 "flush": false, 00:27:32.270 "reset": true, 00:27:32.270 "nvme_admin": false, 00:27:32.270 "nvme_io": false, 00:27:32.270 "nvme_io_md": false, 00:27:32.270 "write_zeroes": true, 00:27:32.270 "zcopy": false, 00:27:32.270 "get_zone_info": false, 00:27:32.270 "zone_management": false, 00:27:32.270 "zone_append": false, 00:27:32.270 "compare": false, 00:27:32.270 "compare_and_write": false, 00:27:32.270 "abort": false, 00:27:32.270 "seek_hole": true, 00:27:32.270 "seek_data": true, 00:27:32.270 "copy": false, 00:27:32.270 "nvme_iov_md": false 00:27:32.270 }, 00:27:32.270 "driver_specific": { 00:27:32.270 "lvol": { 00:27:32.270 "lvol_store_uuid": "91fb88d4-9bc4-48f0-b56f-94039f7e469e", 00:27:32.270 "base_bdev": "nvme0n1", 00:27:32.270 "thin_provision": true, 00:27:32.270 "num_allocated_clusters": 0, 00:27:32.270 "snapshot": false, 00:27:32.270 "clone": false, 00:27:32.270 "esnap_clone": false 00:27:32.270 } 00:27:32.270 } 00:27:32.270 } 00:27:32.270 ]' 00:27:32.270 13:46:44 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:32.270 13:46:44 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:32.270 13:46:44 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:32.270 13:46:44 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:27:32.270 13:46:44 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:32.270 13:46:44 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:32.270 13:46:44 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:27:32.270 13:46:44 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b690a79e-12f9-4856-b295-4b71426d0631 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:27:32.529 [2024-11-20 13:46:44.471759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.529 [2024-11-20 13:46:44.471823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:32.529 [2024-11-20 13:46:44.471861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:32.529 [2024-11-20 13:46:44.471873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.529 [2024-11-20 13:46:44.475586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.529 [2024-11-20 13:46:44.475645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:32.529 [2024-11-20 13:46:44.475663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.667 ms 00:27:32.529 [2024-11-20 13:46:44.475674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.529 [2024-11-20 13:46:44.475883] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:32.529 [2024-11-20 13:46:44.476954] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:32.529 [2024-11-20 13:46:44.476994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.529 [2024-11-20 13:46:44.477007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:32.529 [2024-11-20 13:46:44.477021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:27:32.529 [2024-11-20 13:46:44.477032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.529 [2024-11-20 13:46:44.477252] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 95612c22-11a0-46d9-b67f-3ffaf6f746c4 00:27:32.529 [2024-11-20 13:46:44.478828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.529 [2024-11-20 13:46:44.478868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:32.529 [2024-11-20 13:46:44.478882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:32.529 [2024-11-20 13:46:44.478897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.788 [2024-11-20 13:46:44.486833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.788 [2024-11-20 13:46:44.486873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:32.788 [2024-11-20 13:46:44.486889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.763 ms 00:27:32.788 [2024-11-20 13:46:44.486905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.788 [2024-11-20 13:46:44.487120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.788 [2024-11-20 13:46:44.487146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:32.788 [2024-11-20 13:46:44.487160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.107 ms 00:27:32.788 [2024-11-20 13:46:44.487178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.788 [2024-11-20 13:46:44.487249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.788 [2024-11-20 13:46:44.487268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:32.788 [2024-11-20 13:46:44.487291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:32.788 [2024-11-20 13:46:44.487308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.788 [2024-11-20 13:46:44.487372] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:32.788 [2024-11-20 13:46:44.492796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.788 [2024-11-20 13:46:44.492842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:32.788 [2024-11-20 13:46:44.492860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.438 ms 00:27:32.788 [2024-11-20 13:46:44.492871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.788 [2024-11-20 13:46:44.493021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.788 [2024-11-20 13:46:44.493044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:32.788 [2024-11-20 13:46:44.493060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:32.788 [2024-11-20 13:46:44.493088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.788 [2024-11-20 13:46:44.493157] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:32.788 [2024-11-20 13:46:44.493313] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:32.788 [2024-11-20 13:46:44.493344] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:32.788 [2024-11-20 13:46:44.493360] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:32.788 [2024-11-20 13:46:44.493379] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:32.788 [2024-11-20 13:46:44.493392] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:32.788 [2024-11-20 13:46:44.493408] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:32.788 [2024-11-20 13:46:44.493419] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:32.788 [2024-11-20 13:46:44.493432] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:32.788 [2024-11-20 13:46:44.493446] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:32.788 [2024-11-20 13:46:44.493472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.788 [2024-11-20 13:46:44.493483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:32.788 [2024-11-20 13:46:44.493497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:27:32.788 [2024-11-20 13:46:44.493508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.788 [2024-11-20 13:46:44.493667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.788 
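The layout header above is internally consistent and worth a quick audit: 23592960 L2P entries at 4 bytes each is exactly the 90.00 MiB l2p region printed below (and the blk_sz:0x5a00 type:0x2 region in the superblock dump), while 23592960 logical 4 KiB blocks is 92160 MiB of user-visible space, roughly the 103424 MiB volume less the --overprovisioning 10 share and FTL metadata. Hedged arithmetic:

echo $((23592960 * 4 / 1024 / 1024))       # -> 90    MiB of L2P table
echo $((23592960 * 4096 / 1024 / 1024))    # -> 92160 MiB (~90 GiB) exposed by ftl0
echo $((0x5a00 * 4096 / 1024 / 1024))      # -> 90    MiB, the type:0x2 l2p region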
[2024-11-20 13:46:44.493689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:32.788 [2024-11-20 13:46:44.493704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:27:32.788 [2024-11-20 13:46:44.493714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.788 [2024-11-20 13:46:44.493919] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:32.788 [2024-11-20 13:46:44.493937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:32.788 [2024-11-20 13:46:44.493951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:32.788 [2024-11-20 13:46:44.493963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.788 [2024-11-20 13:46:44.493977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:32.788 [2024-11-20 13:46:44.494002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:32.788 [2024-11-20 13:46:44.494016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:32.788 [2024-11-20 13:46:44.494026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:32.788 [2024-11-20 13:46:44.494039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:32.788 [2024-11-20 13:46:44.494049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:32.788 [2024-11-20 13:46:44.494062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:32.788 [2024-11-20 13:46:44.494072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:32.788 [2024-11-20 13:46:44.494084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:32.788 [2024-11-20 13:46:44.494095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:32.788 [2024-11-20 13:46:44.494109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:32.788 [2024-11-20 13:46:44.494119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.788 [2024-11-20 13:46:44.494135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:32.788 [2024-11-20 13:46:44.494145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:32.788 [2024-11-20 13:46:44.494158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.788 [2024-11-20 13:46:44.494168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:32.788 [2024-11-20 13:46:44.494192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:32.788 [2024-11-20 13:46:44.494202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.788 [2024-11-20 13:46:44.494214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:32.788 [2024-11-20 13:46:44.494225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:32.788 [2024-11-20 13:46:44.494237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.788 [2024-11-20 13:46:44.494247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:32.788 [2024-11-20 13:46:44.494260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:32.788 [2024-11-20 13:46:44.494270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.788 [2024-11-20 13:46:44.494282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:27:32.789 [2024-11-20 13:46:44.494293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:32.789 [2024-11-20 13:46:44.494306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.789 [2024-11-20 13:46:44.494317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:32.789 [2024-11-20 13:46:44.494332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:32.789 [2024-11-20 13:46:44.494342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:32.789 [2024-11-20 13:46:44.494355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:32.789 [2024-11-20 13:46:44.494365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:32.789 [2024-11-20 13:46:44.494377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:32.789 [2024-11-20 13:46:44.494387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:32.789 [2024-11-20 13:46:44.494400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:32.789 [2024-11-20 13:46:44.494410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.789 [2024-11-20 13:46:44.494435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:32.789 [2024-11-20 13:46:44.494444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:32.789 [2024-11-20 13:46:44.494456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.789 [2024-11-20 13:46:44.494465] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:32.789 [2024-11-20 13:46:44.494478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:32.789 [2024-11-20 13:46:44.494489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:32.789 [2024-11-20 13:46:44.494503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.789 [2024-11-20 13:46:44.494514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:32.789 [2024-11-20 13:46:44.494533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:32.789 [2024-11-20 13:46:44.494543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:32.789 [2024-11-20 13:46:44.494556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:32.789 [2024-11-20 13:46:44.494566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:32.789 [2024-11-20 13:46:44.494579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:32.789 [2024-11-20 13:46:44.494594] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:32.789 [2024-11-20 13:46:44.494620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:32.789 [2024-11-20 13:46:44.494637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:32.789 [2024-11-20 13:46:44.494651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:32.789 [2024-11-20 13:46:44.494662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:27:32.789 [2024-11-20 13:46:44.494676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:32.789 [2024-11-20 13:46:44.494687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:32.789 [2024-11-20 13:46:44.494717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:32.789 [2024-11-20 13:46:44.494728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:32.789 [2024-11-20 13:46:44.494741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:32.789 [2024-11-20 13:46:44.494753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:32.789 [2024-11-20 13:46:44.494769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:32.789 [2024-11-20 13:46:44.494780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:32.789 [2024-11-20 13:46:44.494793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:32.789 [2024-11-20 13:46:44.494804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:32.789 [2024-11-20 13:46:44.494819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:32.789 [2024-11-20 13:46:44.494831] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:32.789 [2024-11-20 13:46:44.494869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:32.789 [2024-11-20 13:46:44.494890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:32.789 [2024-11-20 13:46:44.494905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:32.789 [2024-11-20 13:46:44.494916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:32.789 [2024-11-20 13:46:44.494930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:32.789 [2024-11-20 13:46:44.494943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.789 [2024-11-20 13:46:44.494957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:32.789 [2024-11-20 13:46:44.494969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:27:32.789 [2024-11-20 13:46:44.494983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.789 [2024-11-20 13:46:44.495148] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:27:32.789 [2024-11-20 13:46:44.495178] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:36.979 [2024-11-20 13:46:48.153471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.979 [2024-11-20 13:46:48.153553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:36.979 [2024-11-20 13:46:48.153572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3664.259 ms 00:27:36.979 [2024-11-20 13:46:48.153586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.979 [2024-11-20 13:46:48.193031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.979 [2024-11-20 13:46:48.193099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:36.979 [2024-11-20 13:46:48.193116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.026 ms 00:27:36.980 [2024-11-20 13:46:48.193130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.193384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.193409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:36.980 [2024-11-20 13:46:48.193422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:36.980 [2024-11-20 13:46:48.193438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.256003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.256070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:36.980 [2024-11-20 13:46:48.256087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.576 ms 00:27:36.980 [2024-11-20 13:46:48.256102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.256291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.256321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:36.980 [2024-11-20 13:46:48.256333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:36.980 [2024-11-20 13:46:48.256346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.256878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.256907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:36.980 [2024-11-20 13:46:48.256927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:27:36.980 [2024-11-20 13:46:48.256940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.257106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.257126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:36.980 [2024-11-20 13:46:48.257137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:27:36.980 [2024-11-20 13:46:48.257153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.280479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.280545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:27:36.980 [2024-11-20 13:46:48.280562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.257 ms 00:27:36.980 [2024-11-20 13:46:48.280575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.294334] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:36.980 [2024-11-20 13:46:48.311052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.311116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:36.980 [2024-11-20 13:46:48.311134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.241 ms 00:27:36.980 [2024-11-20 13:46:48.311145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.419403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.419479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:36.980 [2024-11-20 13:46:48.419500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.210 ms 00:27:36.980 [2024-11-20 13:46:48.419512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.419876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.419900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:36.980 [2024-11-20 13:46:48.419919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:27:36.980 [2024-11-20 13:46:48.419930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.458507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.458585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:36.980 [2024-11-20 13:46:48.458616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.568 ms 00:27:36.980 [2024-11-20 13:46:48.458628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.495717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.495782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:36.980 [2024-11-20 13:46:48.495804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.982 ms 00:27:36.980 [2024-11-20 13:46:48.495815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.496801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.496837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:36.980 [2024-11-20 13:46:48.496852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:27:36.980 [2024-11-20 13:46:48.496863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.605417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.605492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:36.980 [2024-11-20 13:46:48.605516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.627 ms 00:27:36.980 [2024-11-20 13:46:48.605528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
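The "l2p maximum resident size is: 59 (of 60) MiB" notice above is cached-L2P mode at work: --l2p_dram_limit 60 caps resident mapping pages at about 60 MiB even though the full table is 90 MiB, so pages are demand-loaded from the l2p region. The 60 itself is consistent with trim.sh's l2p_percentage=60 applied to the roughly 101 MiB of L2P a 26476544-block volume would need. A back-of-envelope check (an interpretation of the sizing, not trim.sh verbatim):

echo $((26476544 * 4 / 1024 / 1024))             # -> 101 MiB of L2P for the full volume
echo $((26476544 * 4 / 1024 / 1024 * 60 / 100))  # -> 60, the value passed as --l2p_dram_limit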
00:27:36.980 [2024-11-20 13:46:48.644956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.645022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:36.980 [2024-11-20 13:46:48.645042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.224 ms 00:27:36.980 [2024-11-20 13:46:48.645054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.683460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.683527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:36.980 [2024-11-20 13:46:48.683548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.265 ms 00:27:36.980 [2024-11-20 13:46:48.683558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.722711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.722792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:36.980 [2024-11-20 13:46:48.722828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.000 ms 00:27:36.980 [2024-11-20 13:46:48.722863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.723137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.723169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:36.980 [2024-11-20 13:46:48.723189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:36.980 [2024-11-20 13:46:48.723200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.723378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.980 [2024-11-20 13:46:48.723401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:36.980 [2024-11-20 13:46:48.723415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:36.980 [2024-11-20 13:46:48.723425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.980 [2024-11-20 13:46:48.724723] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:36.980 [2024-11-20 13:46:48.729844] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4259.463 ms, result 0 00:27:36.980 [2024-11-20 13:46:48.731247] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:36.980 { 00:27:36.980 "name": "ftl0", 00:27:36.980 "uuid": "95612c22-11a0-46d9-b67f-3ffaf6f746c4" 00:27:36.980 } 00:27:36.980 13:46:48 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:27:36.980 13:46:48 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:27:36.980 13:46:48 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:36.980 13:46:48 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:27:36.980 13:46:48 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:36.980 13:46:48 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:36.980 13:46:48 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:37.239 13:46:48 ftl.ftl_trim -- 
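waitforbdev settles in two steps: bdev_wait_for_examine blocks until pending bdev examine callbacks have run, then bdev_get_bdevs is issued with -t 2000 so the RPC itself waits for ftl0 to be registered before returning. Reduced to the two calls seen in the trace (a sketch, not the helper's full retry loop):

scripts/rpc.py bdev_wait_for_examine              # flush examine callbacks first
scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 \
    >/dev/null && echo "ftl0 is up"               # -t 2000 taken from bdev_timeout in the trace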
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:27:37.497 [ 00:27:37.497 { 00:27:37.497 "name": "ftl0", 00:27:37.497 "aliases": [ 00:27:37.498 "95612c22-11a0-46d9-b67f-3ffaf6f746c4" 00:27:37.498 ], 00:27:37.498 "product_name": "FTL disk", 00:27:37.498 "block_size": 4096, 00:27:37.498 "num_blocks": 23592960, 00:27:37.498 "uuid": "95612c22-11a0-46d9-b67f-3ffaf6f746c4", 00:27:37.498 "assigned_rate_limits": { 00:27:37.498 "rw_ios_per_sec": 0, 00:27:37.498 "rw_mbytes_per_sec": 0, 00:27:37.498 "r_mbytes_per_sec": 0, 00:27:37.498 "w_mbytes_per_sec": 0 00:27:37.498 }, 00:27:37.498 "claimed": false, 00:27:37.498 "zoned": false, 00:27:37.498 "supported_io_types": { 00:27:37.498 "read": true, 00:27:37.498 "write": true, 00:27:37.498 "unmap": true, 00:27:37.498 "flush": true, 00:27:37.498 "reset": false, 00:27:37.498 "nvme_admin": false, 00:27:37.498 "nvme_io": false, 00:27:37.498 "nvme_io_md": false, 00:27:37.498 "write_zeroes": true, 00:27:37.498 "zcopy": false, 00:27:37.498 "get_zone_info": false, 00:27:37.498 "zone_management": false, 00:27:37.498 "zone_append": false, 00:27:37.498 "compare": false, 00:27:37.498 "compare_and_write": false, 00:27:37.498 "abort": false, 00:27:37.498 "seek_hole": false, 00:27:37.498 "seek_data": false, 00:27:37.498 "copy": false, 00:27:37.498 "nvme_iov_md": false 00:27:37.498 }, 00:27:37.498 "driver_specific": { 00:27:37.498 "ftl": { 00:27:37.498 "base_bdev": "b690a79e-12f9-4856-b295-4b71426d0631", 00:27:37.498 "cache": "nvc0n1p0" 00:27:37.498 } 00:27:37.498 } 00:27:37.498 } 00:27:37.498 ] 00:27:37.498 13:46:49 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:27:37.498 13:46:49 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:27:37.498 13:46:49 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:37.498 13:46:49 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:27:37.498 13:46:49 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:27:37.756 13:46:49 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:27:37.756 { 00:27:37.756 "name": "ftl0", 00:27:37.756 "aliases": [ 00:27:37.756 "95612c22-11a0-46d9-b67f-3ffaf6f746c4" 00:27:37.756 ], 00:27:37.756 "product_name": "FTL disk", 00:27:37.756 "block_size": 4096, 00:27:37.756 "num_blocks": 23592960, 00:27:37.756 "uuid": "95612c22-11a0-46d9-b67f-3ffaf6f746c4", 00:27:37.756 "assigned_rate_limits": { 00:27:37.757 "rw_ios_per_sec": 0, 00:27:37.757 "rw_mbytes_per_sec": 0, 00:27:37.757 "r_mbytes_per_sec": 0, 00:27:37.757 "w_mbytes_per_sec": 0 00:27:37.757 }, 00:27:37.757 "claimed": false, 00:27:37.757 "zoned": false, 00:27:37.757 "supported_io_types": { 00:27:37.757 "read": true, 00:27:37.757 "write": true, 00:27:37.757 "unmap": true, 00:27:37.757 "flush": true, 00:27:37.757 "reset": false, 00:27:37.757 "nvme_admin": false, 00:27:37.757 "nvme_io": false, 00:27:37.757 "nvme_io_md": false, 00:27:37.757 "write_zeroes": true, 00:27:37.757 "zcopy": false, 00:27:37.757 "get_zone_info": false, 00:27:37.757 "zone_management": false, 00:27:37.757 "zone_append": false, 00:27:37.757 "compare": false, 00:27:37.757 "compare_and_write": false, 00:27:37.757 "abort": false, 00:27:37.757 "seek_hole": false, 00:27:37.757 "seek_data": false, 00:27:37.757 "copy": false, 00:27:37.757 "nvme_iov_md": false 00:27:37.757 }, 00:27:37.757 "driver_specific": { 00:27:37.757 "ftl": { 00:27:37.757 "base_bdev": "b690a79e-12f9-4856-b295-4b71426d0631", 
00:27:37.757 "cache": "nvc0n1p0" 00:27:37.757 } 00:27:37.757 } 00:27:37.757 } 00:27:37.757 ]' 00:27:37.757 13:46:49 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:27:38.015 13:46:49 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:27:38.015 13:46:49 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:38.015 [2024-11-20 13:46:49.959430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.015 [2024-11-20 13:46:49.959505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:38.015 [2024-11-20 13:46:49.959526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:38.015 [2024-11-20 13:46:49.959543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.015 [2024-11-20 13:46:49.959643] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:38.015 [2024-11-20 13:46:49.963979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.015 [2024-11-20 13:46:49.964017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:38.015 [2024-11-20 13:46:49.964039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.317 ms 00:27:38.015 [2024-11-20 13:46:49.964050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.015 [2024-11-20 13:46:49.965275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.015 [2024-11-20 13:46:49.965303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:38.015 [2024-11-20 13:46:49.965318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:27:38.015 [2024-11-20 13:46:49.965329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.015 [2024-11-20 13:46:49.968201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.015 [2024-11-20 13:46:49.968231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:38.015 [2024-11-20 13:46:49.968246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.813 ms 00:27:38.015 [2024-11-20 13:46:49.968257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.275 [2024-11-20 13:46:49.974050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.275 [2024-11-20 13:46:49.974089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:38.276 [2024-11-20 13:46:49.974105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.681 ms 00:27:38.276 [2024-11-20 13:46:49.974115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.276 [2024-11-20 13:46:50.012330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.276 [2024-11-20 13:46:50.012407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:38.276 [2024-11-20 13:46:50.012432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.107 ms 00:27:38.276 [2024-11-20 13:46:50.012443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.276 [2024-11-20 13:46:50.036441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.276 [2024-11-20 13:46:50.036507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:38.276 [2024-11-20 13:46:50.036530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 23.813 ms 00:27:38.276 [2024-11-20 13:46:50.036547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.276 [2024-11-20 13:46:50.037078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.276 [2024-11-20 13:46:50.037107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:38.276 [2024-11-20 13:46:50.037122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:27:38.276 [2024-11-20 13:46:50.037135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.276 [2024-11-20 13:46:50.074678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.276 [2024-11-20 13:46:50.074735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:38.276 [2024-11-20 13:46:50.074755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.536 ms 00:27:38.276 [2024-11-20 13:46:50.074766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.276 [2024-11-20 13:46:50.111425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.276 [2024-11-20 13:46:50.111487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:38.276 [2024-11-20 13:46:50.111512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.517 ms 00:27:38.276 [2024-11-20 13:46:50.111522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.276 [2024-11-20 13:46:50.147380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.276 [2024-11-20 13:46:50.147439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:38.276 [2024-11-20 13:46:50.147459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.694 ms 00:27:38.276 [2024-11-20 13:46:50.147470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.276 [2024-11-20 13:46:50.183596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.276 [2024-11-20 13:46:50.183664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:38.276 [2024-11-20 13:46:50.183684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.855 ms 00:27:38.276 [2024-11-20 13:46:50.183695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.276 [2024-11-20 13:46:50.183906] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:38.276 [2024-11-20 13:46:50.183934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.183951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.183962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.183976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.183987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184029] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 
[2024-11-20 13:46:50.184364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:38.276 [2024-11-20 13:46:50.184657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:27:38.277 [2024-11-20 13:46:50.184694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.184999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:38.277 [2024-11-20 13:46:50.185232] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:38.277 [2024-11-20 13:46:50.185247] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 95612c22-11a0-46d9-b67f-3ffaf6f746c4 00:27:38.277 [2024-11-20 13:46:50.185259] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:38.277 [2024-11-20 13:46:50.185272] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:38.277 [2024-11-20 13:46:50.185282] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:38.277 [2024-11-20 13:46:50.185299] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:38.277 [2024-11-20 13:46:50.185309] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:38.277 [2024-11-20 13:46:50.185323] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:27:38.277 [2024-11-20 13:46:50.185333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:38.277 [2024-11-20 13:46:50.185345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:38.277 [2024-11-20 13:46:50.185354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:38.277 [2024-11-20 13:46:50.185369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.277 [2024-11-20 13:46:50.185380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:38.277 [2024-11-20 13:46:50.185393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:27:38.277 [2024-11-20 13:46:50.185403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.277 [2024-11-20 13:46:50.205249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.277 [2024-11-20 13:46:50.205298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:38.277 [2024-11-20 13:46:50.205320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.807 ms 00:27:38.277 [2024-11-20 13:46:50.205330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.277 [2024-11-20 13:46:50.205977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.277 [2024-11-20 13:46:50.205997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:38.277 [2024-11-20 13:46:50.206013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:27:38.277 [2024-11-20 13:46:50.206023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.537 [2024-11-20 13:46:50.274706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.537 [2024-11-20 13:46:50.274770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:38.537 [2024-11-20 13:46:50.274789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.537 [2024-11-20 13:46:50.274800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.537 [2024-11-20 13:46:50.275033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.537 [2024-11-20 13:46:50.275053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:38.537 [2024-11-20 13:46:50.275069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.537 [2024-11-20 13:46:50.275080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.537 [2024-11-20 13:46:50.275203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.537 [2024-11-20 13:46:50.275222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:38.537 [2024-11-20 13:46:50.275243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.537 [2024-11-20 13:46:50.275253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.537 [2024-11-20 13:46:50.275314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.537 [2024-11-20 13:46:50.275327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:38.537 [2024-11-20 13:46:50.275340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.537 [2024-11-20 13:46:50.275351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.537 [2024-11-20 13:46:50.410393] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.537 [2024-11-20 13:46:50.410463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:38.537 [2024-11-20 13:46:50.410486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.537 [2024-11-20 13:46:50.410497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.796 [2024-11-20 13:46:50.514397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.796 [2024-11-20 13:46:50.514473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:38.796 [2024-11-20 13:46:50.514492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.796 [2024-11-20 13:46:50.514504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.796 [2024-11-20 13:46:50.514730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.796 [2024-11-20 13:46:50.514745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:38.796 [2024-11-20 13:46:50.514784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.796 [2024-11-20 13:46:50.514811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.796 [2024-11-20 13:46:50.514951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.796 [2024-11-20 13:46:50.514966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:38.796 [2024-11-20 13:46:50.514980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.796 [2024-11-20 13:46:50.514990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.796 [2024-11-20 13:46:50.515175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.796 [2024-11-20 13:46:50.515195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:38.796 [2024-11-20 13:46:50.515210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.796 [2024-11-20 13:46:50.515224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.796 [2024-11-20 13:46:50.515307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.796 [2024-11-20 13:46:50.515326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:38.796 [2024-11-20 13:46:50.515339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.796 [2024-11-20 13:46:50.515350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.796 [2024-11-20 13:46:50.515433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.796 [2024-11-20 13:46:50.515445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:38.796 [2024-11-20 13:46:50.515462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.796 [2024-11-20 13:46:50.515472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.796 [2024-11-20 13:46:50.515572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.796 [2024-11-20 13:46:50.515586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:38.796 [2024-11-20 13:46:50.515614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.796 [2024-11-20 13:46:50.515625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:38.796 [2024-11-20 13:46:50.515968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 557.428 ms, result 0 00:27:38.796 true 00:27:38.796 13:46:50 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78471 00:27:38.796 13:46:50 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78471 ']' 00:27:38.796 13:46:50 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78471 00:27:38.796 13:46:50 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:27:38.796 13:46:50 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.796 13:46:50 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78471 00:27:38.796 13:46:50 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:38.796 killing process with pid 78471 00:27:38.796 13:46:50 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:38.796 13:46:50 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78471' 00:27:38.796 13:46:50 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78471 00:27:38.796 13:46:50 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78471 00:27:44.067 13:46:55 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:27:45.003 65536+0 records in 00:27:45.003 65536+0 records out 00:27:45.003 268435456 bytes (268 MB, 256 MiB) copied, 1.0625 s, 253 MB/s 00:27:45.003 13:46:56 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:45.003 [2024-11-20 13:46:56.792470] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
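(Editor's note on the teardown and data-load steps traced above.) The xtrace records from ftl.ftl_trim show the teardown helper step by step. A minimal sketch of that helper, reconstructed only from the commands actually traced here — the real function lives in test/common/autotest_common.sh, and the sudo branch body is an assumption since it is not exercised in this trace:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1              # @954: require a pid argument
      kill -0 "$pid" || return 1             # @958: fail fast if already gone
      if [ "$(uname)" = Linux ]; then        # @959: resolve the command name
          process_name=$(ps --no-headers -o comm= "$pid")   # @960 (here: reactor_0)
      fi
      if [ "$process_name" = sudo ]; then    # @964: evaluated false above
          :  # assumption: the real helper handles a sudo wrapper specially
      fi
      echo "killing process with pid $pid"   # @972
      kill "$pid"                            # @973
      wait "$pid"                            # @978: reap and propagate status
  }

The trim test then prepares its workload, per the trim.sh@66/@69 trace above: dd generates a 256 MiB random pattern and spdk_dd replays it onto the ftl0 bdev. A sketch of those two commands — the dd output file is not visible in the trace and is assumed to be test/ftl/random_pattern, the file spdk_dd reads:

  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
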
00:27:45.003 [2024-11-20 13:46:56.792644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78688 ] 00:27:45.263 [2024-11-20 13:46:56.980678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.263 [2024-11-20 13:46:57.102589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.833 [2024-11-20 13:46:57.489986] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:45.833 [2024-11-20 13:46:57.490086] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:45.833 [2024-11-20 13:46:57.656398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.833 [2024-11-20 13:46:57.656474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:45.833 [2024-11-20 13:46:57.656492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:45.833 [2024-11-20 13:46:57.656504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.833 [2024-11-20 13:46:57.659853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.833 [2024-11-20 13:46:57.659904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:45.833 [2024-11-20 13:46:57.659918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.331 ms 00:27:45.833 [2024-11-20 13:46:57.659930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.833 [2024-11-20 13:46:57.660058] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:45.833 [2024-11-20 13:46:57.661051] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:45.833 [2024-11-20 13:46:57.661087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.833 [2024-11-20 13:46:57.661098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:45.833 [2024-11-20 13:46:57.661110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 00:27:45.833 [2024-11-20 13:46:57.661121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.833 [2024-11-20 13:46:57.662691] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:45.833 [2024-11-20 13:46:57.683652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.833 [2024-11-20 13:46:57.683739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:45.833 [2024-11-20 13:46:57.683758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.991 ms 00:27:45.833 [2024-11-20 13:46:57.683769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.833 [2024-11-20 13:46:57.683979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.833 [2024-11-20 13:46:57.683995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:45.833 [2024-11-20 13:46:57.684007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:45.833 [2024-11-20 13:46:57.684017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.833 [2024-11-20 13:46:57.691609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:45.833 [2024-11-20 13:46:57.691650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:45.833 [2024-11-20 13:46:57.691662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.551 ms 00:27:45.833 [2024-11-20 13:46:57.691672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.833 [2024-11-20 13:46:57.691797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.833 [2024-11-20 13:46:57.691814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:45.833 [2024-11-20 13:46:57.691826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:45.833 [2024-11-20 13:46:57.691837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.833 [2024-11-20 13:46:57.691872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.833 [2024-11-20 13:46:57.691887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:45.833 [2024-11-20 13:46:57.691899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:45.833 [2024-11-20 13:46:57.691910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.833 [2024-11-20 13:46:57.691937] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:45.833 [2024-11-20 13:46:57.696810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.833 [2024-11-20 13:46:57.696846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:45.833 [2024-11-20 13:46:57.696859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.888 ms 00:27:45.833 [2024-11-20 13:46:57.696869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.833 [2024-11-20 13:46:57.696961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.833 [2024-11-20 13:46:57.696975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:45.833 [2024-11-20 13:46:57.696986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:45.833 [2024-11-20 13:46:57.696997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.833 [2024-11-20 13:46:57.697023] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:45.833 [2024-11-20 13:46:57.697052] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:45.833 [2024-11-20 13:46:57.697091] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:45.833 [2024-11-20 13:46:57.697111] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:45.833 [2024-11-20 13:46:57.697201] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:45.833 [2024-11-20 13:46:57.697215] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:45.833 [2024-11-20 13:46:57.697229] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:45.833 [2024-11-20 13:46:57.697244] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:45.833 [2024-11-20 13:46:57.697260] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:45.833 [2024-11-20 13:46:57.697272] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:45.833 [2024-11-20 13:46:57.697283] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:45.833 [2024-11-20 13:46:57.697294] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:45.833 [2024-11-20 13:46:57.697304] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:45.833 [2024-11-20 13:46:57.697316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.833 [2024-11-20 13:46:57.697327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:45.833 [2024-11-20 13:46:57.697337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:27:45.833 [2024-11-20 13:46:57.697348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.833 [2024-11-20 13:46:57.697425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.833 [2024-11-20 13:46:57.697441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:45.833 [2024-11-20 13:46:57.697452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:45.833 [2024-11-20 13:46:57.697462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.833 [2024-11-20 13:46:57.697557] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:45.833 [2024-11-20 13:46:57.697572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:45.833 [2024-11-20 13:46:57.697583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:45.833 [2024-11-20 13:46:57.697594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.833 [2024-11-20 13:46:57.697616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:45.833 [2024-11-20 13:46:57.697625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:45.833 [2024-11-20 13:46:57.697635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:45.833 [2024-11-20 13:46:57.697646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:45.833 [2024-11-20 13:46:57.697657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:45.833 [2024-11-20 13:46:57.697667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:45.833 [2024-11-20 13:46:57.697678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:45.833 [2024-11-20 13:46:57.697691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:45.833 [2024-11-20 13:46:57.697700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:45.833 [2024-11-20 13:46:57.697722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:45.833 [2024-11-20 13:46:57.697732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:45.833 [2024-11-20 13:46:57.697741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.833 [2024-11-20 13:46:57.697751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:45.833 [2024-11-20 13:46:57.697760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:45.833 [2024-11-20 13:46:57.697770] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.833 [2024-11-20 13:46:57.697780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:45.833 [2024-11-20 13:46:57.697790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:45.833 [2024-11-20 13:46:57.697802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.833 [2024-11-20 13:46:57.697811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:45.833 [2024-11-20 13:46:57.697821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:45.833 [2024-11-20 13:46:57.697830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.833 [2024-11-20 13:46:57.697839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:45.833 [2024-11-20 13:46:57.697849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:45.833 [2024-11-20 13:46:57.697858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.833 [2024-11-20 13:46:57.697868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:45.833 [2024-11-20 13:46:57.697877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:45.833 [2024-11-20 13:46:57.697886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.834 [2024-11-20 13:46:57.697895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:45.834 [2024-11-20 13:46:57.697904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:45.834 [2024-11-20 13:46:57.697913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:45.834 [2024-11-20 13:46:57.697923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:45.834 [2024-11-20 13:46:57.697931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:45.834 [2024-11-20 13:46:57.697941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:45.834 [2024-11-20 13:46:57.697950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:45.834 [2024-11-20 13:46:57.697960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:45.834 [2024-11-20 13:46:57.697969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.834 [2024-11-20 13:46:57.697978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:45.834 [2024-11-20 13:46:57.697987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:45.834 [2024-11-20 13:46:57.697996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.834 [2024-11-20 13:46:57.698007] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:45.834 [2024-11-20 13:46:57.698016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:45.834 [2024-11-20 13:46:57.698026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:45.834 [2024-11-20 13:46:57.698040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.834 [2024-11-20 13:46:57.698051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:45.834 [2024-11-20 13:46:57.698060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:45.834 [2024-11-20 13:46:57.698070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:45.834 
[2024-11-20 13:46:57.698079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:45.834 [2024-11-20 13:46:57.698089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:45.834 [2024-11-20 13:46:57.698098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:45.834 [2024-11-20 13:46:57.698109] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:45.834 [2024-11-20 13:46:57.698121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:45.834 [2024-11-20 13:46:57.698133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:45.834 [2024-11-20 13:46:57.698144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:45.834 [2024-11-20 13:46:57.698154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:45.834 [2024-11-20 13:46:57.698165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:45.834 [2024-11-20 13:46:57.698183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:45.834 [2024-11-20 13:46:57.698194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:45.834 [2024-11-20 13:46:57.698204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:45.834 [2024-11-20 13:46:57.698214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:45.834 [2024-11-20 13:46:57.698225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:45.834 [2024-11-20 13:46:57.698236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:45.834 [2024-11-20 13:46:57.698246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:45.834 [2024-11-20 13:46:57.698256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:45.834 [2024-11-20 13:46:57.698267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:45.834 [2024-11-20 13:46:57.698278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:45.834 [2024-11-20 13:46:57.698288] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:45.834 [2024-11-20 13:46:57.698299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:45.834 [2024-11-20 13:46:57.698310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:45.834 [2024-11-20 13:46:57.698320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:45.834 [2024-11-20 13:46:57.698330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:45.834 [2024-11-20 13:46:57.698341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:45.834 [2024-11-20 13:46:57.698355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.834 [2024-11-20 13:46:57.698366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:45.834 [2024-11-20 13:46:57.698381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:27:45.834 [2024-11-20 13:46:57.698391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.834 [2024-11-20 13:46:57.740643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.834 [2024-11-20 13:46:57.740710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:45.834 [2024-11-20 13:46:57.740728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.257 ms 00:27:45.834 [2024-11-20 13:46:57.740740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.834 [2024-11-20 13:46:57.740924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.834 [2024-11-20 13:46:57.740946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:45.834 [2024-11-20 13:46:57.740958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:45.834 [2024-11-20 13:46:57.740969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.094 [2024-11-20 13:46:57.799275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.094 [2024-11-20 13:46:57.799341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:46.094 [2024-11-20 13:46:57.799358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.370 ms 00:27:46.094 [2024-11-20 13:46:57.799374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.094 [2024-11-20 13:46:57.799531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.094 [2024-11-20 13:46:57.799546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:46.094 [2024-11-20 13:46:57.799559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:46.094 [2024-11-20 13:46:57.799569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.094 [2024-11-20 13:46:57.800067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.094 [2024-11-20 13:46:57.800092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:46.094 [2024-11-20 13:46:57.800104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:27:46.094 [2024-11-20 13:46:57.800123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.094 [2024-11-20 13:46:57.800255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.094 [2024-11-20 13:46:57.800276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:46.094 [2024-11-20 13:46:57.800287] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:27:46.094 [2024-11-20 13:46:57.800298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.094 [2024-11-20 13:46:57.821700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.094 [2024-11-20 13:46:57.821768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:46.094 [2024-11-20 13:46:57.821786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.409 ms 00:27:46.094 [2024-11-20 13:46:57.821797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.094 [2024-11-20 13:46:57.842987] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:46.094 [2024-11-20 13:46:57.843067] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:46.094 [2024-11-20 13:46:57.843089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.094 [2024-11-20 13:46:57.843101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:46.094 [2024-11-20 13:46:57.843116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.144 ms 00:27:46.094 [2024-11-20 13:46:57.843127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.094 [2024-11-20 13:46:57.875429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.094 [2024-11-20 13:46:57.875520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:46.094 [2024-11-20 13:46:57.875566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.169 ms 00:27:46.094 [2024-11-20 13:46:57.875578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.094 [2024-11-20 13:46:57.895677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.094 [2024-11-20 13:46:57.895755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:46.094 [2024-11-20 13:46:57.895771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.921 ms 00:27:46.094 [2024-11-20 13:46:57.895783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.094 [2024-11-20 13:46:57.915801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.094 [2024-11-20 13:46:57.915882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:46.094 [2024-11-20 13:46:57.915900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.876 ms 00:27:46.095 [2024-11-20 13:46:57.915910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.095 [2024-11-20 13:46:57.916857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.095 [2024-11-20 13:46:57.916893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:46.095 [2024-11-20 13:46:57.916907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:27:46.095 [2024-11-20 13:46:57.916918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.095 [2024-11-20 13:46:58.009283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.095 [2024-11-20 13:46:58.009359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:46.095 [2024-11-20 13:46:58.009378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 92.474 ms 00:27:46.095 [2024-11-20 13:46:58.009389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.095 [2024-11-20 13:46:58.023579] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:46.095 [2024-11-20 13:46:58.040510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.095 [2024-11-20 13:46:58.040571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:46.095 [2024-11-20 13:46:58.040588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.980 ms 00:27:46.095 [2024-11-20 13:46:58.040610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.095 [2024-11-20 13:46:58.040755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.095 [2024-11-20 13:46:58.040775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:46.095 [2024-11-20 13:46:58.040786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:46.095 [2024-11-20 13:46:58.040797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.095 [2024-11-20 13:46:58.040857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.095 [2024-11-20 13:46:58.040870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:46.095 [2024-11-20 13:46:58.040881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:46.095 [2024-11-20 13:46:58.040892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.095 [2024-11-20 13:46:58.040931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.095 [2024-11-20 13:46:58.040945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:46.095 [2024-11-20 13:46:58.040960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:46.095 [2024-11-20 13:46:58.040970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.095 [2024-11-20 13:46:58.041010] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:46.095 [2024-11-20 13:46:58.041023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.095 [2024-11-20 13:46:58.041033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:46.095 [2024-11-20 13:46:58.041045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:46.095 [2024-11-20 13:46:58.041055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.354 [2024-11-20 13:46:58.079355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.354 [2024-11-20 13:46:58.079443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:46.354 [2024-11-20 13:46:58.079460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.332 ms 00:27:46.354 [2024-11-20 13:46:58.079472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:46.354 [2024-11-20 13:46:58.079695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:46.354 [2024-11-20 13:46:58.079712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:46.354 [2024-11-20 13:46:58.079725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:46.354 [2024-11-20 13:46:58.079735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
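(Editor's note on the trace format.) Each FTL management step above is emitted by mngt/ftl_mngt.c:trace_step as an Action / name / duration / status quadruplet. A hypothetical post-processing snippet — not part of the test suite — for ranking the slowest startup steps from a saved copy of this console output (console.log is an assumed filename, and one record per line is assumed, as in the raw Jenkins log):

  awk '/trace_step/ && /name: /     { sub(/.*name: /, "");     name = $0 }
       /trace_step/ && /duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                      printf "%10.3f ms  %s\n", $0, name }' console.log |
      sort -rn | head

On the startup sequence above, this would surface "Restore P2L checkpoints" (92.474 ms) and "Initialize NV cache" (58.370 ms) as the dominant steps.
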
00:27:46.354 [2024-11-20 13:46:58.080759] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:46.354 [2024-11-20 13:46:58.085886] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 424.700 ms, result 0 00:27:46.354 [2024-11-20 13:46:58.086860] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:46.354 [2024-11-20 13:46:58.106005] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:47.291  [2024-11-20T13:47:00.184Z] Copying: 23/256 [MB] (23 MBps) [2024-11-20T13:47:01.147Z] Copying: 46/256 [MB] (22 MBps) [2024-11-20T13:47:02.526Z] Copying: 71/256 [MB] (24 MBps) [2024-11-20T13:47:03.124Z] Copying: 96/256 [MB] (25 MBps) [2024-11-20T13:47:04.498Z] Copying: 122/256 [MB] (26 MBps) [2024-11-20T13:47:05.431Z] Copying: 150/256 [MB] (27 MBps) [2024-11-20T13:47:06.368Z] Copying: 176/256 [MB] (26 MBps) [2024-11-20T13:47:07.304Z] Copying: 201/256 [MB] (24 MBps) [2024-11-20T13:47:08.252Z] Copying: 225/256 [MB] (24 MBps) [2024-11-20T13:47:08.511Z] Copying: 249/256 [MB] (23 MBps) [2024-11-20T13:47:08.511Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-20 13:47:08.361420] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:56.554 [2024-11-20 13:47:08.377312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.554 [2024-11-20 13:47:08.377361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:56.554 [2024-11-20 13:47:08.377378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:56.554 [2024-11-20 13:47:08.377389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.554 [2024-11-20 13:47:08.377421] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:56.554 [2024-11-20 13:47:08.382033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.554 [2024-11-20 13:47:08.382067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:56.554 [2024-11-20 13:47:08.382080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.602 ms 00:27:56.554 [2024-11-20 13:47:08.382091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.554 [2024-11-20 13:47:08.384453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.554 [2024-11-20 13:47:08.384495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:56.554 [2024-11-20 13:47:08.384510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.338 ms 00:27:56.554 [2024-11-20 13:47:08.384521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.554 [2024-11-20 13:47:08.391961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.554 [2024-11-20 13:47:08.392004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:56.554 [2024-11-20 13:47:08.392025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.432 ms 00:27:56.554 [2024-11-20 13:47:08.392035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.554 [2024-11-20 13:47:08.397681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.554 [2024-11-20 13:47:08.397716] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:56.554 [2024-11-20 13:47:08.397729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.616 ms 00:27:56.554 [2024-11-20 13:47:08.397740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.554 [2024-11-20 13:47:08.433834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.554 [2024-11-20 13:47:08.433873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:56.554 [2024-11-20 13:47:08.433889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.068 ms 00:27:56.554 [2024-11-20 13:47:08.433899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.554 [2024-11-20 13:47:08.455511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.554 [2024-11-20 13:47:08.455552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:56.554 [2024-11-20 13:47:08.455572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.590 ms 00:27:56.554 [2024-11-20 13:47:08.455588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.554 [2024-11-20 13:47:08.455735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.554 [2024-11-20 13:47:08.455749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:56.554 [2024-11-20 13:47:08.455761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:27:56.554 [2024-11-20 13:47:08.455772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.554 [2024-11-20 13:47:08.493478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.554 [2024-11-20 13:47:08.493519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:56.554 [2024-11-20 13:47:08.493532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.749 ms 00:27:56.554 [2024-11-20 13:47:08.493543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.814 [2024-11-20 13:47:08.530409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.814 [2024-11-20 13:47:08.530446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:56.814 [2024-11-20 13:47:08.530459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.869 ms 00:27:56.814 [2024-11-20 13:47:08.530470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.814 [2024-11-20 13:47:08.566047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.814 [2024-11-20 13:47:08.566085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:56.814 [2024-11-20 13:47:08.566098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.579 ms 00:27:56.814 [2024-11-20 13:47:08.566108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.814 [2024-11-20 13:47:08.602254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.814 [2024-11-20 13:47:08.602291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:56.814 [2024-11-20 13:47:08.602304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.113 ms 00:27:56.814 [2024-11-20 13:47:08.602314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.814 [2024-11-20 13:47:08.602384] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:27:56.814 [2024-11-20 13:47:08.602409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:56.814 [2024-11-20 13:47:08.602840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.602851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.602862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.602874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.602884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.602894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.602905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.602915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.602925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.602936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.602947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.602957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.602967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.602978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.602988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.602998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603224] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603509] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:56.815 [2024-11-20 13:47:08.603527] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:56.815 [2024-11-20 13:47:08.603538] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 95612c22-11a0-46d9-b67f-3ffaf6f746c4 00:27:56.815 [2024-11-20 13:47:08.603550] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:56.815 [2024-11-20 13:47:08.603560] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:56.815 [2024-11-20 13:47:08.603570] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:56.815 [2024-11-20 13:47:08.603581] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:56.815 [2024-11-20 13:47:08.603591] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:56.815 [2024-11-20 13:47:08.603610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:56.815 [2024-11-20 13:47:08.603621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:56.815 [2024-11-20 13:47:08.603631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:56.815 [2024-11-20 13:47:08.603641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:56.815 [2024-11-20 13:47:08.603650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.815 [2024-11-20 13:47:08.603661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:56.815 [2024-11-20 13:47:08.603677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:27:56.815 [2024-11-20 13:47:08.603688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.815 [2024-11-20 13:47:08.625459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.815 [2024-11-20 13:47:08.625495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:56.815 [2024-11-20 13:47:08.625509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.784 ms 00:27:56.815 [2024-11-20 13:47:08.625520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.815 [2024-11-20 13:47:08.626188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.815 [2024-11-20 13:47:08.626232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:56.815 [2024-11-20 13:47:08.626244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:27:56.815 [2024-11-20 13:47:08.626255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.815 [2024-11-20 13:47:08.686123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.815 [2024-11-20 13:47:08.686160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:56.815 [2024-11-20 13:47:08.686180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.815 [2024-11-20 13:47:08.686192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.815 [2024-11-20 13:47:08.686282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.815 [2024-11-20 13:47:08.686299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:56.815 [2024-11-20 13:47:08.686311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.815 [2024-11-20 13:47:08.686322] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.815 [2024-11-20 13:47:08.686373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.815 [2024-11-20 13:47:08.686386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:56.815 [2024-11-20 13:47:08.686398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.815 [2024-11-20 13:47:08.686409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.816 [2024-11-20 13:47:08.686430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:56.816 [2024-11-20 13:47:08.686441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:56.816 [2024-11-20 13:47:08.686456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:56.816 [2024-11-20 13:47:08.686467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.074 [2024-11-20 13:47:08.824840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.074 [2024-11-20 13:47:08.824903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:57.074 [2024-11-20 13:47:08.824920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.074 [2024-11-20 13:47:08.824932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.074 [2024-11-20 13:47:08.938376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.074 [2024-11-20 13:47:08.938453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:57.074 [2024-11-20 13:47:08.938468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.074 [2024-11-20 13:47:08.938480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.074 [2024-11-20 13:47:08.938619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.074 [2024-11-20 13:47:08.938634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:57.075 [2024-11-20 13:47:08.938646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.075 [2024-11-20 13:47:08.938657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.075 [2024-11-20 13:47:08.938691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.075 [2024-11-20 13:47:08.938704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:57.075 [2024-11-20 13:47:08.938715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.075 [2024-11-20 13:47:08.938730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.075 [2024-11-20 13:47:08.938869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.075 [2024-11-20 13:47:08.938884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:57.075 [2024-11-20 13:47:08.938896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:57.075 [2024-11-20 13:47:08.938907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.075 [2024-11-20 13:47:08.938949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:57.075 [2024-11-20 13:47:08.938963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:57.075 [2024-11-20 13:47:08.938974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms
00:27:57.075 [2024-11-20 13:47:08.938984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:57.075 [2024-11-20 13:47:08.939037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:57.075 [2024-11-20 13:47:08.939050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:57.075 [2024-11-20 13:47:08.939062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:57.075 [2024-11-20 13:47:08.939072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:57.075 [2024-11-20 13:47:08.939126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:57.075 [2024-11-20 13:47:08.939139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:57.075 [2024-11-20 13:47:08.939150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:57.075 [2024-11-20 13:47:08.939166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:57.075 [2024-11-20 13:47:08.939341] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 562.928 ms, result 0
00:27:58.451
00:27:58.451
00:27:58.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:58.451 13:47:10 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78827
00:27:58.451 13:47:10 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78827
00:27:58.451 13:47:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78827 ']'
00:27:58.451 13:47:10 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:58.451 13:47:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:58.451 13:47:10 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:58.451 13:47:10 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:27:58.451 13:47:10 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:58.451 13:47:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:27:58.711 [2024-11-20 13:47:10.416821] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
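The block above is the seam between two test stages: the previous FTL instance finishes its 'FTL shutdown' management process, then the ftl_trim test launches a fresh spdk_tgt with the ftl_init log component and waits for its RPC socket. A minimal sketch of that start/poll/teardown pattern, using only the paths and values printed in this log — the rpc_get_methods probe and the retry loop are assumptions standing in for autotest_common.sh's waitforlisten, not a copy of it:

    #!/usr/bin/env bash
    # Sketch of the harness flow traced above; not the autotest implementation.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    "$spdk_tgt" -L ftl_init &        # enable the ftl_init debug log component
    svcpid=$!

    # Poll the UNIX-domain RPC socket until the target answers; max_retries=100
    # mirrors the variable in the trace, the rpc_get_methods probe is assumed.
    for ((i = 0; i < 100; i++)); do
        "$rpc" -s "$sock" -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

    # ... test body goes here, driven through "$rpc" ...

    kill "$svcpid" && wait "$svcpid" # teardown, as killprocess does further down

The same PID (78827) reappears below when killprocess tears the target down after the unmap calls.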
00:27:58.711 [2024-11-20 13:47:10.416959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78827 ] 00:27:58.711 [2024-11-20 13:47:10.589463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.969 [2024-11-20 13:47:10.728872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.905 13:47:11 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:59.905 13:47:11 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:27:59.905 13:47:11 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:28:00.164 [2024-11-20 13:47:11.985775] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:00.164 [2024-11-20 13:47:11.985850] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:00.430 [2024-11-20 13:47:12.171333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.430 [2024-11-20 13:47:12.171396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:00.430 [2024-11-20 13:47:12.171416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:00.430 [2024-11-20 13:47:12.171428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.430 [2024-11-20 13:47:12.175844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.430 [2024-11-20 13:47:12.175885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:00.430 [2024-11-20 13:47:12.175900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.399 ms 00:28:00.430 [2024-11-20 13:47:12.175911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.430 [2024-11-20 13:47:12.176031] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:00.430 [2024-11-20 13:47:12.177109] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:00.430 [2024-11-20 13:47:12.177145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.430 [2024-11-20 13:47:12.177157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:00.430 [2024-11-20 13:47:12.177172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.128 ms 00:28:00.430 [2024-11-20 13:47:12.177183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.430 [2024-11-20 13:47:12.179697] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:00.430 [2024-11-20 13:47:12.200909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.430 [2024-11-20 13:47:12.200961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:00.430 [2024-11-20 13:47:12.200977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.250 ms 00:28:00.430 [2024-11-20 13:47:12.200995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.430 [2024-11-20 13:47:12.201114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.430 [2024-11-20 13:47:12.201136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:00.430 [2024-11-20 13:47:12.201149] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:00.430 [2024-11-20 13:47:12.201165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.430 [2024-11-20 13:47:12.213775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.430 [2024-11-20 13:47:12.213827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:00.430 [2024-11-20 13:47:12.213841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.566 ms 00:28:00.430 [2024-11-20 13:47:12.213858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.430 [2024-11-20 13:47:12.214032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.430 [2024-11-20 13:47:12.214052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:00.430 [2024-11-20 13:47:12.214064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:28:00.430 [2024-11-20 13:47:12.214078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.430 [2024-11-20 13:47:12.214118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.430 [2024-11-20 13:47:12.214133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:00.430 [2024-11-20 13:47:12.214144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:00.430 [2024-11-20 13:47:12.214157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.430 [2024-11-20 13:47:12.214194] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:00.430 [2024-11-20 13:47:12.220338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.430 [2024-11-20 13:47:12.220371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:00.430 [2024-11-20 13:47:12.220393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.157 ms 00:28:00.430 [2024-11-20 13:47:12.220417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.430 [2024-11-20 13:47:12.220486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.430 [2024-11-20 13:47:12.220499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:00.430 [2024-11-20 13:47:12.220517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:00.430 [2024-11-20 13:47:12.220533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.430 [2024-11-20 13:47:12.220564] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:00.430 [2024-11-20 13:47:12.220593] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:00.430 [2024-11-20 13:47:12.220661] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:00.430 [2024-11-20 13:47:12.220682] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:00.430 [2024-11-20 13:47:12.220787] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:00.430 [2024-11-20 13:47:12.220802] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:00.430 [2024-11-20 13:47:12.220830] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:00.430 [2024-11-20 13:47:12.220845] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:00.430 [2024-11-20 13:47:12.220864] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:00.430 [2024-11-20 13:47:12.220876] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:00.430 [2024-11-20 13:47:12.220893] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:00.430 [2024-11-20 13:47:12.220903] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:00.430 [2024-11-20 13:47:12.220926] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:00.430 [2024-11-20 13:47:12.220938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.430 [2024-11-20 13:47:12.220954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:00.430 [2024-11-20 13:47:12.220966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:28:00.430 [2024-11-20 13:47:12.220981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.430 [2024-11-20 13:47:12.221067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.430 [2024-11-20 13:47:12.221084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:00.430 [2024-11-20 13:47:12.221095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:00.430 [2024-11-20 13:47:12.221111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.430 [2024-11-20 13:47:12.221207] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:00.431 [2024-11-20 13:47:12.221227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:00.431 [2024-11-20 13:47:12.221239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:00.431 [2024-11-20 13:47:12.221255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:00.431 [2024-11-20 13:47:12.221266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:00.431 [2024-11-20 13:47:12.221282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:00.431 [2024-11-20 13:47:12.221293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:00.431 [2024-11-20 13:47:12.221316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:00.431 [2024-11-20 13:47:12.221325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:00.431 [2024-11-20 13:47:12.221341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:00.431 [2024-11-20 13:47:12.221351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:00.431 [2024-11-20 13:47:12.221367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:00.431 [2024-11-20 13:47:12.221377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:00.431 [2024-11-20 13:47:12.221395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:00.431 [2024-11-20 13:47:12.221405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:00.431 [2024-11-20 13:47:12.221421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:00.431 
[2024-11-20 13:47:12.221431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:00.431 [2024-11-20 13:47:12.221446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:00.431 [2024-11-20 13:47:12.221457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:00.431 [2024-11-20 13:47:12.221473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:00.431 [2024-11-20 13:47:12.221496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:00.431 [2024-11-20 13:47:12.221512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:00.431 [2024-11-20 13:47:12.221522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:00.431 [2024-11-20 13:47:12.221543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:00.431 [2024-11-20 13:47:12.221553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:00.431 [2024-11-20 13:47:12.221569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:00.431 [2024-11-20 13:47:12.221579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:00.431 [2024-11-20 13:47:12.221595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:00.431 [2024-11-20 13:47:12.221616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:00.431 [2024-11-20 13:47:12.221631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:00.431 [2024-11-20 13:47:12.221642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:00.431 [2024-11-20 13:47:12.221659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:00.431 [2024-11-20 13:47:12.221669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:00.431 [2024-11-20 13:47:12.221685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:00.431 [2024-11-20 13:47:12.221695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:00.431 [2024-11-20 13:47:12.221710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:00.431 [2024-11-20 13:47:12.221720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:00.431 [2024-11-20 13:47:12.221736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:00.431 [2024-11-20 13:47:12.221745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:00.431 [2024-11-20 13:47:12.221766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:00.431 [2024-11-20 13:47:12.221776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:00.431 [2024-11-20 13:47:12.221792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:00.431 [2024-11-20 13:47:12.221802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:00.431 [2024-11-20 13:47:12.221817] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:00.431 [2024-11-20 13:47:12.221835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:00.431 [2024-11-20 13:47:12.221851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:00.431 [2024-11-20 13:47:12.221862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:00.431 [2024-11-20 13:47:12.221879] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:28:00.431 [2024-11-20 13:47:12.221889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:00.431 [2024-11-20 13:47:12.221905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:00.431 [2024-11-20 13:47:12.221915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:00.431 [2024-11-20 13:47:12.221930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:00.431 [2024-11-20 13:47:12.221940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:00.431 [2024-11-20 13:47:12.221958] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:00.431 [2024-11-20 13:47:12.221971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:00.431 [2024-11-20 13:47:12.221994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:00.431 [2024-11-20 13:47:12.222006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:00.431 [2024-11-20 13:47:12.222022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:00.431 [2024-11-20 13:47:12.222034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:00.431 [2024-11-20 13:47:12.222051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:00.431 [2024-11-20 13:47:12.222062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:00.431 [2024-11-20 13:47:12.222079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:00.431 [2024-11-20 13:47:12.222090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:00.431 [2024-11-20 13:47:12.222107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:00.431 [2024-11-20 13:47:12.222118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:00.431 [2024-11-20 13:47:12.222134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:00.431 [2024-11-20 13:47:12.222145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:00.431 [2024-11-20 13:47:12.222187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:00.431 [2024-11-20 13:47:12.222200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:00.431 [2024-11-20 13:47:12.222224] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:00.431 [2024-11-20 
13:47:12.222237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:00.431 [2024-11-20 13:47:12.222262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:00.432 [2024-11-20 13:47:12.222274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:00.432 [2024-11-20 13:47:12.222292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:00.432 [2024-11-20 13:47:12.222305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:00.432 [2024-11-20 13:47:12.222323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.432 [2024-11-20 13:47:12.222335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:00.432 [2024-11-20 13:47:12.222355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:28:00.432 [2024-11-20 13:47:12.222366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.432 [2024-11-20 13:47:12.274847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.432 [2024-11-20 13:47:12.274909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:00.432 [2024-11-20 13:47:12.274930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.468 ms 00:28:00.432 [2024-11-20 13:47:12.274945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.432 [2024-11-20 13:47:12.275171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.432 [2024-11-20 13:47:12.275185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:00.432 [2024-11-20 13:47:12.275202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:28:00.432 [2024-11-20 13:47:12.275212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.432 [2024-11-20 13:47:12.333729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.432 [2024-11-20 13:47:12.333793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:00.432 [2024-11-20 13:47:12.333815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.572 ms 00:28:00.432 [2024-11-20 13:47:12.333827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.432 [2024-11-20 13:47:12.333960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.432 [2024-11-20 13:47:12.333973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:00.432 [2024-11-20 13:47:12.333992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:00.432 [2024-11-20 13:47:12.334004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.432 [2024-11-20 13:47:12.334824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.432 [2024-11-20 13:47:12.334846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:00.432 [2024-11-20 13:47:12.334885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:28:00.432 [2024-11-20 13:47:12.334897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:00.432 [2024-11-20 13:47:12.335062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.432 [2024-11-20 13:47:12.335076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:00.432 [2024-11-20 13:47:12.335091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:28:00.432 [2024-11-20 13:47:12.335102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.432 [2024-11-20 13:47:12.364382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.432 [2024-11-20 13:47:12.364437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:00.432 [2024-11-20 13:47:12.364458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.291 ms 00:28:00.432 [2024-11-20 13:47:12.364470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.690 [2024-11-20 13:47:12.399753] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:28:00.690 [2024-11-20 13:47:12.399808] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:00.690 [2024-11-20 13:47:12.399831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.690 [2024-11-20 13:47:12.399844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:00.690 [2024-11-20 13:47:12.399863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.226 ms 00:28:00.690 [2024-11-20 13:47:12.399875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.690 [2024-11-20 13:47:12.432341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.690 [2024-11-20 13:47:12.432399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:00.690 [2024-11-20 13:47:12.432423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.400 ms 00:28:00.690 [2024-11-20 13:47:12.432436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.690 [2024-11-20 13:47:12.452038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.690 [2024-11-20 13:47:12.452081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:00.690 [2024-11-20 13:47:12.452107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.508 ms 00:28:00.690 [2024-11-20 13:47:12.452118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.690 [2024-11-20 13:47:12.471299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.690 [2024-11-20 13:47:12.471340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:00.690 [2024-11-20 13:47:12.471361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.115 ms 00:28:00.690 [2024-11-20 13:47:12.471371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.690 [2024-11-20 13:47:12.472287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.690 [2024-11-20 13:47:12.472322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:00.690 [2024-11-20 13:47:12.472343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:28:00.690 [2024-11-20 13:47:12.472354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.690 [2024-11-20 
13:47:12.575042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.690 [2024-11-20 13:47:12.575113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:00.690 [2024-11-20 13:47:12.575140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.813 ms 00:28:00.690 [2024-11-20 13:47:12.575154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.690 [2024-11-20 13:47:12.587490] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:00.690 [2024-11-20 13:47:12.614446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.690 [2024-11-20 13:47:12.614525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:00.690 [2024-11-20 13:47:12.614564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.194 ms 00:28:00.690 [2024-11-20 13:47:12.614582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.690 [2024-11-20 13:47:12.614764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.690 [2024-11-20 13:47:12.614788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:00.690 [2024-11-20 13:47:12.614801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:00.690 [2024-11-20 13:47:12.614819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.690 [2024-11-20 13:47:12.614893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.690 [2024-11-20 13:47:12.614913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:00.690 [2024-11-20 13:47:12.614925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:00.690 [2024-11-20 13:47:12.614953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.690 [2024-11-20 13:47:12.614984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.690 [2024-11-20 13:47:12.615002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:00.690 [2024-11-20 13:47:12.615014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:00.690 [2024-11-20 13:47:12.615031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.690 [2024-11-20 13:47:12.615096] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:00.690 [2024-11-20 13:47:12.615121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.690 [2024-11-20 13:47:12.615132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:00.690 [2024-11-20 13:47:12.615156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:00.690 [2024-11-20 13:47:12.615167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.947 [2024-11-20 13:47:12.653799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.947 [2024-11-20 13:47:12.653841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:00.947 [2024-11-20 13:47:12.653864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.632 ms 00:28:00.947 [2024-11-20 13:47:12.653876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.947 [2024-11-20 13:47:12.654016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.947 [2024-11-20 13:47:12.654031] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:28:00.947 [2024-11-20 13:47:12.654049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms
00:28:00.947 [2024-11-20 13:47:12.654066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:00.947 [2024-11-20 13:47:12.655441] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:00.947 [2024-11-20 13:47:12.660014] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 484.479 ms, result 0
00:28:00.947 [2024-11-20 13:47:12.661343] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:00.947 Some configs were skipped because the RPC state that can call them passed over.
00:28:00.947 13:47:12 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:28:01.205 [2024-11-20 13:47:12.921191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:01.205 [2024-11-20 13:47:12.921278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:28:01.205 [2024-11-20 13:47:12.921296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.648 ms
00:28:01.205 [2024-11-20 13:47:12.921315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:01.205 [2024-11-20 13:47:12.921362] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.829 ms, result 0
00:28:01.205 true
00:28:01.205 13:47:12 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:28:01.205 [2024-11-20 13:47:13.120908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:01.205 [2024-11-20 13:47:13.120969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:28:01.205 [2024-11-20 13:47:13.120993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms
00:28:01.205 [2024-11-20 13:47:13.121006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:01.205 [2024-11-20 13:47:13.121063] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.601 ms, result 0
00:28:01.205 true
00:28:01.205 13:47:13 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78827
00:28:01.205 13:47:13 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78827 ']'
00:28:01.205 13:47:13 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78827
00:28:01.205 13:47:13 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:28:01.205 13:47:13 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:01.205 13:47:13 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78827
00:28:01.464 killing process with pid 78827
00:28:01.464 13:47:13 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:01.464 13:47:13 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:01.464 13:47:13 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78827'
00:28:01.464 13:47:13 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78827
00:28:01.464 13:47:13 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78827
00:28:02.845 [2024-11-20 13:47:14.477209]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.845 [2024-11-20 13:47:14.477291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:02.845 [2024-11-20 13:47:14.477308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:02.845 [2024-11-20 13:47:14.477322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.845 [2024-11-20 13:47:14.477351] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:02.845 [2024-11-20 13:47:14.482167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.845 [2024-11-20 13:47:14.482208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:02.845 [2024-11-20 13:47:14.482227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.801 ms 00:28:02.845 [2024-11-20 13:47:14.482238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.845 [2024-11-20 13:47:14.482527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.845 [2024-11-20 13:47:14.482542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:02.845 [2024-11-20 13:47:14.482555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:28:02.845 [2024-11-20 13:47:14.482565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.845 [2024-11-20 13:47:14.486054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.845 [2024-11-20 13:47:14.486093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:02.845 [2024-11-20 13:47:14.486111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.469 ms 00:28:02.845 [2024-11-20 13:47:14.486122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.845 [2024-11-20 13:47:14.491911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.845 [2024-11-20 13:47:14.491950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:02.845 [2024-11-20 13:47:14.491966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.753 ms 00:28:02.845 [2024-11-20 13:47:14.491977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.845 [2024-11-20 13:47:14.507568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.845 [2024-11-20 13:47:14.507609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:02.845 [2024-11-20 13:47:14.507629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.548 ms 00:28:02.845 [2024-11-20 13:47:14.507651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.845 [2024-11-20 13:47:14.519272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.845 [2024-11-20 13:47:14.519316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:02.845 [2024-11-20 13:47:14.519333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.560 ms 00:28:02.845 [2024-11-20 13:47:14.519345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.845 [2024-11-20 13:47:14.519502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.845 [2024-11-20 13:47:14.519516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:02.845 [2024-11-20 13:47:14.519530] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:28:02.845 [2024-11-20 13:47:14.519540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.845 [2024-11-20 13:47:14.535319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.845 [2024-11-20 13:47:14.535354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:02.845 [2024-11-20 13:47:14.535371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.779 ms 00:28:02.845 [2024-11-20 13:47:14.535381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.845 [2024-11-20 13:47:14.551430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.845 [2024-11-20 13:47:14.551466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:02.845 [2024-11-20 13:47:14.551503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.014 ms 00:28:02.845 [2024-11-20 13:47:14.551513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.845 [2024-11-20 13:47:14.566828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.845 [2024-11-20 13:47:14.566864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:02.845 [2024-11-20 13:47:14.566887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.277 ms 00:28:02.845 [2024-11-20 13:47:14.566897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.845 [2024-11-20 13:47:14.581536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.845 [2024-11-20 13:47:14.581574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:02.845 [2024-11-20 13:47:14.581593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.568 ms 00:28:02.845 [2024-11-20 13:47:14.581612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.845 [2024-11-20 13:47:14.581669] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:02.845 [2024-11-20 13:47:14.581688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 
13:47:14.581842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.581998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.582012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.582036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.582048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.582066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.582078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.582096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.582108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.582126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.582138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:02.845 [2024-11-20 13:47:14.582156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
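(The ftl_dev_dump_bands output above and below repeats one fixed pattern per band: "Band N: <valid blocks> / <total blocks> wr_cnt: <write count> state: <state>". Every band from 1 through 100 reports 0 valid blocks out of 261120 and state "free", i.e. the device holds no user data at this teardown. A rough per-band capacity check, assuming SPDK FTL's 4 KiB block size (FTL_BLOCK_SIZE) — a hypothetical one-liner for illustration, not produced by the test run:

    $ python3 -c 'print(261120 * 4096 / 1024**2, "MiB per band")'    # 261120 blocks x 4 KiB
    1020.0 MiB per band
)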
00:28:02.846 [2024-11-20 13:47:14.582260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.582992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:02.846 [2024-11-20 13:47:14.583291] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:02.846 [2024-11-20 13:47:14.583323] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 95612c22-11a0-46d9-b67f-3ffaf6f746c4 00:28:02.846 [2024-11-20 13:47:14.583351] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:02.846 [2024-11-20 13:47:14.583376] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:02.846 [2024-11-20 13:47:14.583387] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:02.846 [2024-11-20 13:47:14.583404] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:02.846 [2024-11-20 13:47:14.583414] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:02.846 [2024-11-20 13:47:14.583432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:02.846 [2024-11-20 13:47:14.583443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:02.846 [2024-11-20 13:47:14.583458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:02.846 [2024-11-20 13:47:14.583468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:02.846 [2024-11-20 13:47:14.583484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
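(The statistics block just dumped explains the "WAF: inf" line: write amplification factor is conventionally media writes divided by user writes, and this run recorded total writes: 960 against user writes: 0, so every write was internal metadata/housekeeping and the ratio is undefined, printed as infinity. A quick check of that arithmetic, assuming the conventional definition — hypothetical, not part of the test output:

    $ python3 -c 'w, u = 960, 0; print(w / u if u else "inf")'    # WAF = media writes / user writes
    inf
)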
00:28:02.846 [2024-11-20 13:47:14.583496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:02.846 [2024-11-20 13:47:14.583513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.822 ms 00:28:02.846 [2024-11-20 13:47:14.583524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.846 [2024-11-20 13:47:14.605703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.846 [2024-11-20 13:47:14.605740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:02.846 [2024-11-20 13:47:14.605765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.172 ms 00:28:02.846 [2024-11-20 13:47:14.605776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.846 [2024-11-20 13:47:14.606508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:02.846 [2024-11-20 13:47:14.606536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:02.846 [2024-11-20 13:47:14.606554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:28:02.846 [2024-11-20 13:47:14.606572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.846 [2024-11-20 13:47:14.682962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.847 [2024-11-20 13:47:14.683026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:02.847 [2024-11-20 13:47:14.683047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.847 [2024-11-20 13:47:14.683059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.847 [2024-11-20 13:47:14.683242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.847 [2024-11-20 13:47:14.683270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:02.847 [2024-11-20 13:47:14.683287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.847 [2024-11-20 13:47:14.683305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.847 [2024-11-20 13:47:14.683371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.847 [2024-11-20 13:47:14.683385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:02.847 [2024-11-20 13:47:14.683407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.847 [2024-11-20 13:47:14.683417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:02.847 [2024-11-20 13:47:14.683445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:02.847 [2024-11-20 13:47:14.683456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:02.847 [2024-11-20 13:47:14.683472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:02.847 [2024-11-20 13:47:14.683483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.106 [2024-11-20 13:47:14.822195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.106 [2024-11-20 13:47:14.822284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:03.106 [2024-11-20 13:47:14.822305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.106 [2024-11-20 13:47:14.822318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.106 [2024-11-20 
13:47:14.932392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.106 [2024-11-20 13:47:14.932465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:03.106 [2024-11-20 13:47:14.932488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.106 [2024-11-20 13:47:14.932507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.106 [2024-11-20 13:47:14.932671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.106 [2024-11-20 13:47:14.932687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:03.106 [2024-11-20 13:47:14.932710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.106 [2024-11-20 13:47:14.932721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.106 [2024-11-20 13:47:14.932761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.106 [2024-11-20 13:47:14.932773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:03.106 [2024-11-20 13:47:14.932789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.106 [2024-11-20 13:47:14.932800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.106 [2024-11-20 13:47:14.932934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.106 [2024-11-20 13:47:14.932947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:03.106 [2024-11-20 13:47:14.932965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.106 [2024-11-20 13:47:14.932976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.106 [2024-11-20 13:47:14.933024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.106 [2024-11-20 13:47:14.933038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:03.106 [2024-11-20 13:47:14.933055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.106 [2024-11-20 13:47:14.933065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.106 [2024-11-20 13:47:14.933125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.106 [2024-11-20 13:47:14.933137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:03.106 [2024-11-20 13:47:14.933160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.106 [2024-11-20 13:47:14.933172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.106 [2024-11-20 13:47:14.933232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.106 [2024-11-20 13:47:14.933245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:03.106 [2024-11-20 13:47:14.933261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.106 [2024-11-20 13:47:14.933272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.106 [2024-11-20 13:47:14.933453] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 456.944 ms, result 0 00:28:04.486 13:47:16 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:28:04.486 13:47:16 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:04.486 [2024-11-20 13:47:16.236840] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:28:04.486 [2024-11-20 13:47:16.237001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78898 ] 00:28:04.486 [2024-11-20 13:47:16.428791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.745 [2024-11-20 13:47:16.574672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.315 [2024-11-20 13:47:17.006746] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:05.315 [2024-11-20 13:47:17.006822] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:05.315 [2024-11-20 13:47:17.173910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.315 [2024-11-20 13:47:17.173987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:05.315 [2024-11-20 13:47:17.174004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:05.315 [2024-11-20 13:47:17.174031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.315 [2024-11-20 13:47:17.177571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.315 [2024-11-20 13:47:17.177624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:05.315 [2024-11-20 13:47:17.177638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.522 ms 00:28:05.315 [2024-11-20 13:47:17.177650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.315 [2024-11-20 13:47:17.177769] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:05.315 [2024-11-20 13:47:17.178797] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:05.315 [2024-11-20 13:47:17.178836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.315 [2024-11-20 13:47:17.178848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:05.315 [2024-11-20 13:47:17.178861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:28:05.315 [2024-11-20 13:47:17.178872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.315 [2024-11-20 13:47:17.181452] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:05.315 [2024-11-20 13:47:17.202228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.315 [2024-11-20 13:47:17.202275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:05.315 [2024-11-20 13:47:17.202290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.810 ms 00:28:05.315 [2024-11-20 13:47:17.202301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.315 [2024-11-20 13:47:17.202407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.315 [2024-11-20 13:47:17.202422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:05.315 [2024-11-20 13:47:17.202434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.028 ms 00:28:05.315 [2024-11-20 13:47:17.202445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.315 [2024-11-20 13:47:17.214620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.315 [2024-11-20 13:47:17.214654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:05.315 [2024-11-20 13:47:17.214668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.150 ms 00:28:05.315 [2024-11-20 13:47:17.214679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.315 [2024-11-20 13:47:17.214805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.315 [2024-11-20 13:47:17.214821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:05.315 [2024-11-20 13:47:17.214834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:05.315 [2024-11-20 13:47:17.214845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.315 [2024-11-20 13:47:17.214876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.315 [2024-11-20 13:47:17.214892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:05.315 [2024-11-20 13:47:17.214903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:05.315 [2024-11-20 13:47:17.214915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.315 [2024-11-20 13:47:17.214942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:05.315 [2024-11-20 13:47:17.220584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.315 [2024-11-20 13:47:17.220630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:05.315 [2024-11-20 13:47:17.220644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.660 ms 00:28:05.315 [2024-11-20 13:47:17.220655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.315 [2024-11-20 13:47:17.220707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.315 [2024-11-20 13:47:17.220721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:05.315 [2024-11-20 13:47:17.220733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:05.315 [2024-11-20 13:47:17.220744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.315 [2024-11-20 13:47:17.220765] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:05.315 [2024-11-20 13:47:17.220795] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:05.315 [2024-11-20 13:47:17.220835] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:05.315 [2024-11-20 13:47:17.220854] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:05.315 [2024-11-20 13:47:17.220951] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:05.315 [2024-11-20 13:47:17.220964] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:05.315 [2024-11-20 13:47:17.220978] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:05.315 [2024-11-20 13:47:17.220993] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:05.315 [2024-11-20 13:47:17.221010] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:05.315 [2024-11-20 13:47:17.221022] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:05.315 [2024-11-20 13:47:17.221033] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:05.315 [2024-11-20 13:47:17.221043] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:05.315 [2024-11-20 13:47:17.221054] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:05.315 [2024-11-20 13:47:17.221066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.315 [2024-11-20 13:47:17.221077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:05.315 [2024-11-20 13:47:17.221088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:28:05.315 [2024-11-20 13:47:17.221100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.315 [2024-11-20 13:47:17.221177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.315 [2024-11-20 13:47:17.221195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:05.315 [2024-11-20 13:47:17.221206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:05.315 [2024-11-20 13:47:17.221217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.315 [2024-11-20 13:47:17.221311] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:05.315 [2024-11-20 13:47:17.221325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:05.315 [2024-11-20 13:47:17.221337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:05.315 [2024-11-20 13:47:17.221348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.315 [2024-11-20 13:47:17.221361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:05.315 [2024-11-20 13:47:17.221371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:05.315 [2024-11-20 13:47:17.221384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:05.315 [2024-11-20 13:47:17.221394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:05.315 [2024-11-20 13:47:17.221405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:05.315 [2024-11-20 13:47:17.221415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:05.315 [2024-11-20 13:47:17.221426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:05.315 [2024-11-20 13:47:17.221439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:05.315 [2024-11-20 13:47:17.221449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:05.315 [2024-11-20 13:47:17.221471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:05.315 [2024-11-20 13:47:17.221481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:05.315 [2024-11-20 13:47:17.221491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.315 [2024-11-20 13:47:17.221501] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:05.315 [2024-11-20 13:47:17.221511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:05.315 [2024-11-20 13:47:17.221521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.315 [2024-11-20 13:47:17.221531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:05.315 [2024-11-20 13:47:17.221541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:05.315 [2024-11-20 13:47:17.221551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.315 [2024-11-20 13:47:17.221560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:05.315 [2024-11-20 13:47:17.221570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:05.316 [2024-11-20 13:47:17.221580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.316 [2024-11-20 13:47:17.221589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:05.316 [2024-11-20 13:47:17.221611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:05.316 [2024-11-20 13:47:17.221622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.316 [2024-11-20 13:47:17.221631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:05.316 [2024-11-20 13:47:17.221641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:05.316 [2024-11-20 13:47:17.221651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:05.316 [2024-11-20 13:47:17.221661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:05.316 [2024-11-20 13:47:17.221671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:05.316 [2024-11-20 13:47:17.221681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:05.316 [2024-11-20 13:47:17.221691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:05.316 [2024-11-20 13:47:17.221702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:05.316 [2024-11-20 13:47:17.221711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:05.316 [2024-11-20 13:47:17.221721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:05.316 [2024-11-20 13:47:17.221731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:05.316 [2024-11-20 13:47:17.221740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.316 [2024-11-20 13:47:17.221750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:05.316 [2024-11-20 13:47:17.221759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:05.316 [2024-11-20 13:47:17.221768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.316 [2024-11-20 13:47:17.221779] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:05.316 [2024-11-20 13:47:17.221791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:05.316 [2024-11-20 13:47:17.221802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:05.316 [2024-11-20 13:47:17.221817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:05.316 [2024-11-20 13:47:17.221828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:05.316 
[2024-11-20 13:47:17.221838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:05.316 [2024-11-20 13:47:17.221848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:05.316 [2024-11-20 13:47:17.221858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:05.316 [2024-11-20 13:47:17.221868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:05.316 [2024-11-20 13:47:17.221878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:05.316 [2024-11-20 13:47:17.221889] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:05.316 [2024-11-20 13:47:17.221902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:05.316 [2024-11-20 13:47:17.221917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:05.316 [2024-11-20 13:47:17.221929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:05.316 [2024-11-20 13:47:17.221940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:05.316 [2024-11-20 13:47:17.221951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:05.316 [2024-11-20 13:47:17.221963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:05.316 [2024-11-20 13:47:17.221974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:05.316 [2024-11-20 13:47:17.221985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:05.316 [2024-11-20 13:47:17.221995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:05.316 [2024-11-20 13:47:17.222006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:05.316 [2024-11-20 13:47:17.222017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:05.316 [2024-11-20 13:47:17.222029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:05.316 [2024-11-20 13:47:17.222040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:05.316 [2024-11-20 13:47:17.222050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:05.316 [2024-11-20 13:47:17.222061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:05.316 [2024-11-20 13:47:17.222071] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:05.316 [2024-11-20 13:47:17.222083] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:05.316 [2024-11-20 13:47:17.222095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:05.316 [2024-11-20 13:47:17.222105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:05.316 [2024-11-20 13:47:17.222116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:05.316 [2024-11-20 13:47:17.222126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:05.316 [2024-11-20 13:47:17.222140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.316 [2024-11-20 13:47:17.222151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:05.316 [2024-11-20 13:47:17.222167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms 00:28:05.316 [2024-11-20 13:47:17.222188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.577 [2024-11-20 13:47:17.272777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.577 [2024-11-20 13:47:17.272843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:05.577 [2024-11-20 13:47:17.272861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.608 ms 00:28:05.577 [2024-11-20 13:47:17.272873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.577 [2024-11-20 13:47:17.273117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.577 [2024-11-20 13:47:17.273135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:05.577 [2024-11-20 13:47:17.273147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:05.577 [2024-11-20 13:47:17.273158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.577 [2024-11-20 13:47:17.342903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.577 [2024-11-20 13:47:17.342967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:05.577 [2024-11-20 13:47:17.342987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.826 ms 00:28:05.577 [2024-11-20 13:47:17.343000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.577 [2024-11-20 13:47:17.343138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.577 [2024-11-20 13:47:17.343153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:05.577 [2024-11-20 13:47:17.343165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:05.577 [2024-11-20 13:47:17.343176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.577 [2024-11-20 13:47:17.344081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.577 [2024-11-20 13:47:17.344122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:05.577 [2024-11-20 13:47:17.344139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.878 ms 00:28:05.577 [2024-11-20 13:47:17.344162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.577 [2024-11-20 
13:47:17.344339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.577 [2024-11-20 13:47:17.344356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:05.577 [2024-11-20 13:47:17.344371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:28:05.577 [2024-11-20 13:47:17.344385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.577 [2024-11-20 13:47:17.369853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.577 [2024-11-20 13:47:17.369897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:05.577 [2024-11-20 13:47:17.369917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.478 ms 00:28:05.577 [2024-11-20 13:47:17.369931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.577 [2024-11-20 13:47:17.390168] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:28:05.577 [2024-11-20 13:47:17.390220] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:05.577 [2024-11-20 13:47:17.390238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.577 [2024-11-20 13:47:17.390250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:05.577 [2024-11-20 13:47:17.390263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.170 ms 00:28:05.577 [2024-11-20 13:47:17.390274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.577 [2024-11-20 13:47:17.421360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.577 [2024-11-20 13:47:17.421429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:05.577 [2024-11-20 13:47:17.421449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.031 ms 00:28:05.577 [2024-11-20 13:47:17.421464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.577 [2024-11-20 13:47:17.441905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.577 [2024-11-20 13:47:17.441952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:05.577 [2024-11-20 13:47:17.441969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.311 ms 00:28:05.577 [2024-11-20 13:47:17.441981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.577 [2024-11-20 13:47:17.462984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.577 [2024-11-20 13:47:17.463047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:05.577 [2024-11-20 13:47:17.463063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.929 ms 00:28:05.577 [2024-11-20 13:47:17.463075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.577 [2024-11-20 13:47:17.463998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.577 [2024-11-20 13:47:17.464033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:05.577 [2024-11-20 13:47:17.464048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:28:05.577 [2024-11-20 13:47:17.464060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.837 [2024-11-20 13:47:17.563621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:05.837 [2024-11-20 13:47:17.563696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:05.837 [2024-11-20 13:47:17.563715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.686 ms 00:28:05.837 [2024-11-20 13:47:17.563728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.837 [2024-11-20 13:47:17.575341] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:05.837 [2024-11-20 13:47:17.602105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.837 [2024-11-20 13:47:17.602184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:05.837 [2024-11-20 13:47:17.602206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.269 ms 00:28:05.837 [2024-11-20 13:47:17.602228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.837 [2024-11-20 13:47:17.602427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.837 [2024-11-20 13:47:17.602445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:05.837 [2024-11-20 13:47:17.602460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:05.837 [2024-11-20 13:47:17.602474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.837 [2024-11-20 13:47:17.602553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.837 [2024-11-20 13:47:17.602575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:05.837 [2024-11-20 13:47:17.602590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:28:05.837 [2024-11-20 13:47:17.602621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.837 [2024-11-20 13:47:17.602677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.837 [2024-11-20 13:47:17.602696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:05.837 [2024-11-20 13:47:17.602710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:05.837 [2024-11-20 13:47:17.602724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.837 [2024-11-20 13:47:17.602773] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:05.837 [2024-11-20 13:47:17.602788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.837 [2024-11-20 13:47:17.602802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:05.837 [2024-11-20 13:47:17.602817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:05.837 [2024-11-20 13:47:17.602833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.837 [2024-11-20 13:47:17.641102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.837 [2024-11-20 13:47:17.641168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:05.837 [2024-11-20 13:47:17.641185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.291 ms 00:28:05.837 [2024-11-20 13:47:17.641198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.837 [2024-11-20 13:47:17.641377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:05.837 [2024-11-20 13:47:17.641394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:28:05.838 [2024-11-20 13:47:17.641407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:05.838 [2024-11-20 13:47:17.641419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:05.838 [2024-11-20 13:47:17.642883] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:05.838 [2024-11-20 13:47:17.649027] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 469.354 ms, result 0 00:28:05.838 [2024-11-20 13:47:17.649970] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:05.838 [2024-11-20 13:47:17.670547] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:06.773  [2024-11-20T13:47:20.107Z] Copying: 27/256 [MB] (27 MBps) [2024-11-20T13:47:20.739Z] Copying: 51/256 [MB] (23 MBps) [2024-11-20T13:47:21.676Z] Copying: 75/256 [MB] (23 MBps) [2024-11-20T13:47:23.056Z] Copying: 99/256 [MB] (24 MBps) [2024-11-20T13:47:23.992Z] Copying: 124/256 [MB] (25 MBps) [2024-11-20T13:47:24.927Z] Copying: 149/256 [MB] (24 MBps) [2024-11-20T13:47:25.862Z] Copying: 175/256 [MB] (26 MBps) [2024-11-20T13:47:26.797Z] Copying: 200/256 [MB] (24 MBps) [2024-11-20T13:47:27.730Z] Copying: 227/256 [MB] (27 MBps) [2024-11-20T13:47:27.989Z] Copying: 253/256 [MB] (26 MBps) [2024-11-20T13:47:27.989Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-20 13:47:27.738045] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:16.032 [2024-11-20 13:47:27.753310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.032 [2024-11-20 13:47:27.753390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:16.032 [2024-11-20 13:47:27.753409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:16.032 [2024-11-20 13:47:27.753432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.032 [2024-11-20 13:47:27.753463] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:16.032 [2024-11-20 13:47:27.757735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.032 [2024-11-20 13:47:27.757789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:16.032 [2024-11-20 13:47:27.757808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.256 ms 00:28:16.032 [2024-11-20 13:47:27.757823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.032 [2024-11-20 13:47:27.758121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.032 [2024-11-20 13:47:27.758143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:16.032 [2024-11-20 13:47:27.758186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:28:16.032 [2024-11-20 13:47:27.758202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.032 [2024-11-20 13:47:27.761289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.032 [2024-11-20 13:47:27.761325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:16.032 [2024-11-20 13:47:27.761339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.069 ms 00:28:16.032 [2024-11-20 13:47:27.761350] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.032 [2024-11-20 13:47:27.767440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.032 [2024-11-20 13:47:27.767510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:16.032 [2024-11-20 13:47:27.767527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.072 ms 00:28:16.032 [2024-11-20 13:47:27.767538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.032 [2024-11-20 13:47:27.806388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.032 [2024-11-20 13:47:27.806479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:16.032 [2024-11-20 13:47:27.806499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.793 ms 00:28:16.032 [2024-11-20 13:47:27.806510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.032 [2024-11-20 13:47:27.829325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.032 [2024-11-20 13:47:27.829423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:16.032 [2024-11-20 13:47:27.829449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.683 ms 00:28:16.032 [2024-11-20 13:47:27.829461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.032 [2024-11-20 13:47:27.829702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.032 [2024-11-20 13:47:27.829725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:16.032 [2024-11-20 13:47:27.829740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:28:16.032 [2024-11-20 13:47:27.829755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.032 [2024-11-20 13:47:27.870663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.032 [2024-11-20 13:47:27.870756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:16.032 [2024-11-20 13:47:27.870775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.929 ms 00:28:16.032 [2024-11-20 13:47:27.870786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.032 [2024-11-20 13:47:27.908870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.032 [2024-11-20 13:47:27.908943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:16.032 [2024-11-20 13:47:27.908961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.025 ms 00:28:16.032 [2024-11-20 13:47:27.908973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.032 [2024-11-20 13:47:27.947925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.032 [2024-11-20 13:47:27.948006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:16.032 [2024-11-20 13:47:27.948024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.904 ms 00:28:16.032 [2024-11-20 13:47:27.948036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.032 [2024-11-20 13:47:27.986266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.032 [2024-11-20 13:47:27.986345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:16.032 [2024-11-20 13:47:27.986365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 38.127 ms 00:28:16.032 [2024-11-20 13:47:27.986379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.032 [2024-11-20 13:47:27.986500] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:16.032 [2024-11-20 13:47:27.986527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 
[2024-11-20 13:47:27.986850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.986992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:16.032 [2024-11-20 13:47:27.987004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:28:16.033 [2024-11-20 13:47:27.987143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:16.033 [2024-11-20 13:47:27.987475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:16.292 [2024-11-20 13:47:27.987850] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:16.292 [2024-11-20 13:47:27.987862] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 95612c22-11a0-46d9-b67f-3ffaf6f746c4 00:28:16.292 [2024-11-20 13:47:27.987875] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:16.292 [2024-11-20 13:47:27.987886] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:16.292 [2024-11-20 13:47:27.987897] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:16.292 [2024-11-20 13:47:27.987909] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:16.292 [2024-11-20 13:47:27.987920] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:16.292 [2024-11-20 13:47:27.987931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:16.292 [2024-11-20 13:47:27.987943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:16.292 [2024-11-20 13:47:27.987953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:16.292 [2024-11-20 13:47:27.987963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:16.292 [2024-11-20 13:47:27.987975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.292 [2024-11-20 13:47:27.987995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:16.292 [2024-11-20 13:47:27.988007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:28:16.292 [2024-11-20 13:47:27.988019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.292 [2024-11-20 13:47:28.008300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.292 [2024-11-20 13:47:28.008370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:16.292 [2024-11-20 13:47:28.008389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.278 ms 00:28:16.292 [2024-11-20 13:47:28.008401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.292 [2024-11-20 13:47:28.009076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.292 [2024-11-20 13:47:28.009118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:16.292 [2024-11-20 13:47:28.009133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:28:16.292 [2024-11-20 13:47:28.009147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.292 [2024-11-20 13:47:28.064150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.292 [2024-11-20 13:47:28.064224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:16.292 [2024-11-20 13:47:28.064242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.293 [2024-11-20 13:47:28.064253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.293 [2024-11-20 13:47:28.064394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.293 [2024-11-20 
13:47:28.064407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:16.293 [2024-11-20 13:47:28.064418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.293 [2024-11-20 13:47:28.064429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.293 [2024-11-20 13:47:28.064499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.293 [2024-11-20 13:47:28.064513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:16.293 [2024-11-20 13:47:28.064523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.293 [2024-11-20 13:47:28.064534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.293 [2024-11-20 13:47:28.064555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.293 [2024-11-20 13:47:28.064574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:16.293 [2024-11-20 13:47:28.064585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.293 [2024-11-20 13:47:28.064595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.293 [2024-11-20 13:47:28.190905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.293 [2024-11-20 13:47:28.190982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:16.293 [2024-11-20 13:47:28.191000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.293 [2024-11-20 13:47:28.191012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.552 [2024-11-20 13:47:28.296342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.552 [2024-11-20 13:47:28.296422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:16.552 [2024-11-20 13:47:28.296440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.552 [2024-11-20 13:47:28.296451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.552 [2024-11-20 13:47:28.296583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.552 [2024-11-20 13:47:28.296613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:16.552 [2024-11-20 13:47:28.296625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.552 [2024-11-20 13:47:28.296636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.552 [2024-11-20 13:47:28.296668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.552 [2024-11-20 13:47:28.296679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:16.552 [2024-11-20 13:47:28.296699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.552 [2024-11-20 13:47:28.296709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.552 [2024-11-20 13:47:28.296846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.552 [2024-11-20 13:47:28.296861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:16.552 [2024-11-20 13:47:28.296889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.552 [2024-11-20 13:47:28.296900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.552 [2024-11-20 13:47:28.296943] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.552 [2024-11-20 13:47:28.296956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:16.552 [2024-11-20 13:47:28.296967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.552 [2024-11-20 13:47:28.296987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.552 [2024-11-20 13:47:28.297033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.552 [2024-11-20 13:47:28.297045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:16.552 [2024-11-20 13:47:28.297057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.552 [2024-11-20 13:47:28.297069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.552 [2024-11-20 13:47:28.297116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.552 [2024-11-20 13:47:28.297129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:16.552 [2024-11-20 13:47:28.297148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.552 [2024-11-20 13:47:28.297159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.552 [2024-11-20 13:47:28.297322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 544.900 ms, result 0 00:28:17.486 00:28:17.486 00:28:17.486 13:47:29 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:28:17.486 13:47:29 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:28:18.051 13:47:29 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:18.051 [2024-11-20 13:47:29.944389] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
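The three ftl.ftl_trim steps traced just above (trim.sh lines 86, 87 and 90) are plain shell: cmp bounds the comparison to the first 4194304 bytes of the read-back data, md5sum fingerprints that data, and spdk_dd replays the prepared random pattern into the ftl0 bdev before the next trim cycle. A minimal sketch of that sequence, assuming only the paths and flags visible in the log:

#!/usr/bin/env bash
# Sketch of the traced trim.sh steps; paths and flags copied from the log.
set -e
FTL_DIR=/home/vagrant/spdk_repo/spdk/test/ftl

# trim.sh@86: compare the first 4 MiB (4194304 bytes) of the read-back data
# with /dev/zero; success would mean the trimmed range reads back as zeroes.
cmp --bytes=4194304 "$FTL_DIR/data" /dev/zero

# trim.sh@87: record a checksum of the read-back data for later comparison.
md5sum "$FTL_DIR/data"

# trim.sh@90: write 1024 blocks of the prepared random pattern to the ftl0
# bdev, using the FTL bdev configuration captured in ftl.json.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if="$FTL_DIR/random_pattern" --ob=ftl0 --count=1024 \
  --json="$FTL_DIR/config/ftl.json"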
00:28:18.051 [2024-11-20 13:47:29.944547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79041 ] 00:28:18.308 [2024-11-20 13:47:30.128502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.309 [2024-11-20 13:47:30.248414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.873 [2024-11-20 13:47:30.628901] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:18.873 [2024-11-20 13:47:30.628976] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:18.873 [2024-11-20 13:47:30.791829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.873 [2024-11-20 13:47:30.791890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:18.873 [2024-11-20 13:47:30.791908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:18.873 [2024-11-20 13:47:30.791919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.873 [2024-11-20 13:47:30.795129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.873 [2024-11-20 13:47:30.795171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:18.873 [2024-11-20 13:47:30.795185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.193 ms 00:28:18.873 [2024-11-20 13:47:30.795195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.873 [2024-11-20 13:47:30.795399] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:18.873 [2024-11-20 13:47:30.796443] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:18.873 [2024-11-20 13:47:30.796478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.873 [2024-11-20 13:47:30.796489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:18.873 [2024-11-20 13:47:30.796500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:28:18.873 [2024-11-20 13:47:30.796510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.873 [2024-11-20 13:47:30.798126] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:18.873 [2024-11-20 13:47:30.817349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.873 [2024-11-20 13:47:30.817410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:18.873 [2024-11-20 13:47:30.817427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.254 ms 00:28:18.873 [2024-11-20 13:47:30.817438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.873 [2024-11-20 13:47:30.817626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.873 [2024-11-20 13:47:30.817646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:18.873 [2024-11-20 13:47:30.817658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:18.873 [2024-11-20 13:47:30.817669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.873 [2024-11-20 13:47:30.824859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:18.873 [2024-11-20 13:47:30.824888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:18.873 [2024-11-20 13:47:30.824901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.154 ms 00:28:18.873 [2024-11-20 13:47:30.824911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.874 [2024-11-20 13:47:30.825028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.874 [2024-11-20 13:47:30.825042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:18.874 [2024-11-20 13:47:30.825054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:18.874 [2024-11-20 13:47:30.825065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.874 [2024-11-20 13:47:30.825098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.874 [2024-11-20 13:47:30.825117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:18.874 [2024-11-20 13:47:30.825128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:18.874 [2024-11-20 13:47:30.825139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.874 [2024-11-20 13:47:30.825166] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:19.133 [2024-11-20 13:47:30.830166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.133 [2024-11-20 13:47:30.830210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:19.133 [2024-11-20 13:47:30.830224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.015 ms 00:28:19.133 [2024-11-20 13:47:30.830235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.133 [2024-11-20 13:47:30.830315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.133 [2024-11-20 13:47:30.830328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:19.133 [2024-11-20 13:47:30.830339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:19.133 [2024-11-20 13:47:30.830349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.133 [2024-11-20 13:47:30.830374] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:19.133 [2024-11-20 13:47:30.830406] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:19.133 [2024-11-20 13:47:30.830446] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:19.133 [2024-11-20 13:47:30.830468] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:19.133 [2024-11-20 13:47:30.830560] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:19.133 [2024-11-20 13:47:30.830576] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:19.133 [2024-11-20 13:47:30.830589] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:19.133 [2024-11-20 13:47:30.830615] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:19.133 [2024-11-20 13:47:30.830631] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:19.133 [2024-11-20 13:47:30.830644] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:19.133 [2024-11-20 13:47:30.830654] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:19.133 [2024-11-20 13:47:30.830664] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:19.133 [2024-11-20 13:47:30.830673] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:19.133 [2024-11-20 13:47:30.830685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.133 [2024-11-20 13:47:30.830695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:19.133 [2024-11-20 13:47:30.830706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:28:19.133 [2024-11-20 13:47:30.830715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.133 [2024-11-20 13:47:30.830793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.133 [2024-11-20 13:47:30.830807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:19.133 [2024-11-20 13:47:30.830818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:19.133 [2024-11-20 13:47:30.830828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.133 [2024-11-20 13:47:30.830919] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:19.133 [2024-11-20 13:47:30.830936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:19.133 [2024-11-20 13:47:30.830946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.133 [2024-11-20 13:47:30.830957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.133 [2024-11-20 13:47:30.830967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:19.133 [2024-11-20 13:47:30.830977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:19.133 [2024-11-20 13:47:30.830986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:19.133 [2024-11-20 13:47:30.830996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:19.133 [2024-11-20 13:47:30.831005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:19.133 [2024-11-20 13:47:30.831014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.133 [2024-11-20 13:47:30.831025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:19.133 [2024-11-20 13:47:30.831034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:19.133 [2024-11-20 13:47:30.831043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.133 [2024-11-20 13:47:30.831064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:19.133 [2024-11-20 13:47:30.831074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:19.133 [2024-11-20 13:47:30.831084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.133 [2024-11-20 13:47:30.831093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:19.133 [2024-11-20 13:47:30.831103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:19.133 [2024-11-20 13:47:30.831112] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.133 [2024-11-20 13:47:30.831122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:19.133 [2024-11-20 13:47:30.831131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:19.133 [2024-11-20 13:47:30.831140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.133 [2024-11-20 13:47:30.831149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:19.133 [2024-11-20 13:47:30.831158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:19.133 [2024-11-20 13:47:30.831168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.133 [2024-11-20 13:47:30.831178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:19.133 [2024-11-20 13:47:30.831187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:19.133 [2024-11-20 13:47:30.831196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.133 [2024-11-20 13:47:30.831205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:19.133 [2024-11-20 13:47:30.831215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:19.133 [2024-11-20 13:47:30.831224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.133 [2024-11-20 13:47:30.831233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:19.133 [2024-11-20 13:47:30.831242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:19.133 [2024-11-20 13:47:30.831252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.133 [2024-11-20 13:47:30.831261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:19.133 [2024-11-20 13:47:30.831271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:19.133 [2024-11-20 13:47:30.831280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.133 [2024-11-20 13:47:30.831290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:19.133 [2024-11-20 13:47:30.831299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:19.133 [2024-11-20 13:47:30.831308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.133 [2024-11-20 13:47:30.831317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:19.133 [2024-11-20 13:47:30.831326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:19.133 [2024-11-20 13:47:30.831336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.133 [2024-11-20 13:47:30.831345] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:19.133 [2024-11-20 13:47:30.831355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:19.133 [2024-11-20 13:47:30.831365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.133 [2024-11-20 13:47:30.831379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.133 [2024-11-20 13:47:30.831390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:19.134 [2024-11-20 13:47:30.831399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:19.134 [2024-11-20 13:47:30.831408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:19.134 
[2024-11-20 13:47:30.831418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:19.134 [2024-11-20 13:47:30.831427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:19.134 [2024-11-20 13:47:30.831437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:19.134 [2024-11-20 13:47:30.831447] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:19.134 [2024-11-20 13:47:30.831460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.134 [2024-11-20 13:47:30.831472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:19.134 [2024-11-20 13:47:30.831482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:19.134 [2024-11-20 13:47:30.831492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:19.134 [2024-11-20 13:47:30.831503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:19.134 [2024-11-20 13:47:30.831513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:19.134 [2024-11-20 13:47:30.831524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:19.134 [2024-11-20 13:47:30.831534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:19.134 [2024-11-20 13:47:30.831545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:19.134 [2024-11-20 13:47:30.831555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:19.134 [2024-11-20 13:47:30.831566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:19.134 [2024-11-20 13:47:30.831579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:19.134 [2024-11-20 13:47:30.831595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:19.134 [2024-11-20 13:47:30.831619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:19.134 [2024-11-20 13:47:30.831630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:19.134 [2024-11-20 13:47:30.831642] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:19.134 [2024-11-20 13:47:30.831658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.134 [2024-11-20 13:47:30.831675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:19.134 [2024-11-20 13:47:30.831692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:19.134 [2024-11-20 13:47:30.831703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:19.134 [2024-11-20 13:47:30.831716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:19.134 [2024-11-20 13:47:30.831729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.134 [2024-11-20 13:47:30.831740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:19.134 [2024-11-20 13:47:30.831757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:28:19.134 [2024-11-20 13:47:30.831768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.134 [2024-11-20 13:47:30.867858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.134 [2024-11-20 13:47:30.867914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:19.134 [2024-11-20 13:47:30.867930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.082 ms 00:28:19.134 [2024-11-20 13:47:30.867941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.134 [2024-11-20 13:47:30.868127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.134 [2024-11-20 13:47:30.868151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:19.134 [2024-11-20 13:47:30.868163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:19.134 [2024-11-20 13:47:30.868174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.134 [2024-11-20 13:47:30.923915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.134 [2024-11-20 13:47:30.923977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:19.134 [2024-11-20 13:47:30.923994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.803 ms 00:28:19.134 [2024-11-20 13:47:30.924010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.134 [2024-11-20 13:47:30.924165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.134 [2024-11-20 13:47:30.924182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:19.134 [2024-11-20 13:47:30.924195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:19.134 [2024-11-20 13:47:30.924206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.134 [2024-11-20 13:47:30.924698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.134 [2024-11-20 13:47:30.924719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:19.134 [2024-11-20 13:47:30.924731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:28:19.134 [2024-11-20 13:47:30.924749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.134 [2024-11-20 13:47:30.924880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.134 [2024-11-20 13:47:30.924898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:19.134 [2024-11-20 13:47:30.924908] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:28:19.134 [2024-11-20 13:47:30.924919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.134 [2024-11-20 13:47:30.945071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.134 [2024-11-20 13:47:30.945132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:19.134 [2024-11-20 13:47:30.945150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.156 ms 00:28:19.134 [2024-11-20 13:47:30.945162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.134 [2024-11-20 13:47:30.965015] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:28:19.134 [2024-11-20 13:47:30.965068] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:19.134 [2024-11-20 13:47:30.965086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.134 [2024-11-20 13:47:30.965098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:19.134 [2024-11-20 13:47:30.965112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.777 ms 00:28:19.134 [2024-11-20 13:47:30.965122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.134 [2024-11-20 13:47:30.995359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.134 [2024-11-20 13:47:30.995465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:19.134 [2024-11-20 13:47:30.995485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.174 ms 00:28:19.134 [2024-11-20 13:47:30.995497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.134 [2024-11-20 13:47:31.015516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.134 [2024-11-20 13:47:31.015595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:19.134 [2024-11-20 13:47:31.015620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.889 ms 00:28:19.134 [2024-11-20 13:47:31.015631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.134 [2024-11-20 13:47:31.035281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.134 [2024-11-20 13:47:31.035381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:19.134 [2024-11-20 13:47:31.035400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.520 ms 00:28:19.134 [2024-11-20 13:47:31.035411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.134 [2024-11-20 13:47:31.036328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.134 [2024-11-20 13:47:31.036380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:19.134 [2024-11-20 13:47:31.036400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:28:19.134 [2024-11-20 13:47:31.036416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.392 [2024-11-20 13:47:31.127828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.392 [2024-11-20 13:47:31.127911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:19.392 [2024-11-20 13:47:31.127931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 91.505 ms 00:28:19.392 [2024-11-20 13:47:31.127942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.392 [2024-11-20 13:47:31.141379] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:19.392 [2024-11-20 13:47:31.158647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.392 [2024-11-20 13:47:31.158718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:19.392 [2024-11-20 13:47:31.158737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.571 ms 00:28:19.392 [2024-11-20 13:47:31.158756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.392 [2024-11-20 13:47:31.158923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.392 [2024-11-20 13:47:31.158937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:19.392 [2024-11-20 13:47:31.158949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:19.393 [2024-11-20 13:47:31.158960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.393 [2024-11-20 13:47:31.159024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.393 [2024-11-20 13:47:31.159036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:19.393 [2024-11-20 13:47:31.159047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:19.393 [2024-11-20 13:47:31.159057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.393 [2024-11-20 13:47:31.159102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.393 [2024-11-20 13:47:31.159117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:19.393 [2024-11-20 13:47:31.159128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:19.393 [2024-11-20 13:47:31.159138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.393 [2024-11-20 13:47:31.159176] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:19.393 [2024-11-20 13:47:31.159190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.393 [2024-11-20 13:47:31.159200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:19.393 [2024-11-20 13:47:31.159211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:19.393 [2024-11-20 13:47:31.159221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.393 [2024-11-20 13:47:31.197962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.393 [2024-11-20 13:47:31.198054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:19.393 [2024-11-20 13:47:31.198074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.771 ms 00:28:19.393 [2024-11-20 13:47:31.198085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.393 [2024-11-20 13:47:31.198292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.393 [2024-11-20 13:47:31.198309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:19.393 [2024-11-20 13:47:31.198321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:19.393 [2024-11-20 13:47:31.198331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
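Every FTL management step in the startup and shutdown sequences above is traced as a four-entry group from mngt/ftl_mngt.c (427 "Action"/"Rollback", 428 "name", 430 "duration", 431 "status"). Where the console has run many entries together on one long line, a rough sketch like the following (GNU sed/awk assumed; console.log is a hypothetical saved copy of this output) splits the entries apart again and tabulates the per-step durations:

sed -E 's/ [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{3} \[/\n[/g' console.log |
awk '
  / 428:trace_step: .* name: /     { sub(/.* name: /, "");     step = $0 }
  / 430:trace_step: .* duration: / { sub(/.* duration: /, ""); printf "%-28s %s\n", step, $0 }
'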
00:28:19.393 [2024-11-20 13:47:31.199378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:19.393 [2024-11-20 13:47:31.204740] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 407.871 ms, result 0 00:28:19.393 [2024-11-20 13:47:31.205693] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:19.393 [2024-11-20 13:47:31.225733] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:19.651 [2024-11-20T13:47:31.608Z] Copying: 4096/4096 [kB] (average 23 MBps) [2024-11-20 13:47:31.403469] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:19.651 [2024-11-20 13:47:31.419281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.651 [2024-11-20 13:47:31.419339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:19.651 [2024-11-20 13:47:31.419360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:19.651 [2024-11-20 13:47:31.419382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.651 [2024-11-20 13:47:31.419413] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:19.651 [2024-11-20 13:47:31.424283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.651 [2024-11-20 13:47:31.424320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:19.651 [2024-11-20 13:47:31.424337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.854 ms 00:28:19.651 [2024-11-20 13:47:31.424350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.651 [2024-11-20 13:47:31.426464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.651 [2024-11-20 13:47:31.426515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:19.651 [2024-11-20 13:47:31.426531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.079 ms 00:28:19.651 [2024-11-20 13:47:31.426544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.651 [2024-11-20 13:47:31.429706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.651 [2024-11-20 13:47:31.429758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:19.651 [2024-11-20 13:47:31.429772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.142 ms 00:28:19.651 [2024-11-20 13:47:31.429787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.651 [2024-11-20 13:47:31.435510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.651 [2024-11-20 13:47:31.435556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:19.651 [2024-11-20 13:47:31.435572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.690 ms 00:28:19.651 [2024-11-20 13:47:31.435584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.651 [2024-11-20 13:47:31.473424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.651 [2024-11-20 13:47:31.473473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:19.651 [2024-11-20 13:47:31.473492] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 37.819 ms 00:28:19.651 [2024-11-20 13:47:31.473505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.651 [2024-11-20 13:47:31.496241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.651 [2024-11-20 13:47:31.496299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:19.651 [2024-11-20 13:47:31.496322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.705 ms 00:28:19.651 [2024-11-20 13:47:31.496335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.651 [2024-11-20 13:47:31.496492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.651 [2024-11-20 13:47:31.496510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:19.651 [2024-11-20 13:47:31.496524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:28:19.651 [2024-11-20 13:47:31.496537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.651 [2024-11-20 13:47:31.534345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.651 [2024-11-20 13:47:31.534393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:19.651 [2024-11-20 13:47:31.534410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.829 ms 00:28:19.651 [2024-11-20 13:47:31.534423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.651 [2024-11-20 13:47:31.571803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.651 [2024-11-20 13:47:31.571853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:19.651 [2024-11-20 13:47:31.571870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.372 ms 00:28:19.651 [2024-11-20 13:47:31.571882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.911 [2024-11-20 13:47:31.608942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.911 [2024-11-20 13:47:31.608993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:19.911 [2024-11-20 13:47:31.609010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.010 ms 00:28:19.911 [2024-11-20 13:47:31.609022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.911 [2024-11-20 13:47:31.646629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.911 [2024-11-20 13:47:31.646680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:19.911 [2024-11-20 13:47:31.646697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.560 ms 00:28:19.911 [2024-11-20 13:47:31.646710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.911 [2024-11-20 13:47:31.646777] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:19.911 [2024-11-20 13:47:31.646801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.646817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.646832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.646849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:28:19.911 [2024-11-20 13:47:31.646862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.646875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.646888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.646902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.646914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.646927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.646941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.646954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.646968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.646981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.646994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:19.911 [2024-11-20 13:47:31.647695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647835] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.647989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.648003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.648016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.648030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.648043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.648074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.648089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.648103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.648116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.648130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:19.912 [2024-11-20 13:47:31.648150] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:19.912 [2024-11-20 13:47:31.648162] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 95612c22-11a0-46d9-b67f-3ffaf6f746c4 00:28:19.912 [2024-11-20 13:47:31.648176] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:19.912 [2024-11-20 13:47:31.648189] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:28:19.912 [2024-11-20 13:47:31.648201] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:19.912 [2024-11-20 13:47:31.648214] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:19.912 [2024-11-20 13:47:31.648226] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:19.912 [2024-11-20 13:47:31.648240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:19.912 [2024-11-20 13:47:31.648254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:19.912 [2024-11-20 13:47:31.648265] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:19.912 [2024-11-20 13:47:31.648276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:19.912 [2024-11-20 13:47:31.648289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.912 [2024-11-20 13:47:31.648309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:19.912 [2024-11-20 13:47:31.648323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.516 ms 00:28:19.912 [2024-11-20 13:47:31.648336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.912 [2024-11-20 13:47:31.669756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.912 [2024-11-20 13:47:31.669799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:19.912 [2024-11-20 13:47:31.669816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.428 ms 00:28:19.912 [2024-11-20 13:47:31.669829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.912 [2024-11-20 13:47:31.670453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.912 [2024-11-20 13:47:31.670483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:19.912 [2024-11-20 13:47:31.670497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:28:19.912 [2024-11-20 13:47:31.670510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.912 [2024-11-20 13:47:31.731235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.912 [2024-11-20 13:47:31.731281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:19.912 [2024-11-20 13:47:31.731297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.912 [2024-11-20 13:47:31.731311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.912 [2024-11-20 13:47:31.731448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.912 [2024-11-20 13:47:31.731465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:19.912 [2024-11-20 13:47:31.731479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.912 [2024-11-20 13:47:31.731491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.912 [2024-11-20 13:47:31.731562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.912 [2024-11-20 13:47:31.731579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:19.912 [2024-11-20 13:47:31.731592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.912 [2024-11-20 13:47:31.731620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.912 [2024-11-20 13:47:31.731645] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.912 [2024-11-20 13:47:31.731665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:19.912 [2024-11-20 13:47:31.731679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.912 [2024-11-20 13:47:31.731691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.171 [2024-11-20 13:47:31.868211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.171 [2024-11-20 13:47:31.868292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:20.171 [2024-11-20 13:47:31.868326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.172 [2024-11-20 13:47:31.868341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.172 [2024-11-20 13:47:31.978045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.172 [2024-11-20 13:47:31.978121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:20.172 [2024-11-20 13:47:31.978142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.172 [2024-11-20 13:47:31.978156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.172 [2024-11-20 13:47:31.978305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.172 [2024-11-20 13:47:31.978321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:20.172 [2024-11-20 13:47:31.978337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.172 [2024-11-20 13:47:31.978351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.172 [2024-11-20 13:47:31.978389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.172 [2024-11-20 13:47:31.978406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:20.172 [2024-11-20 13:47:31.978428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.172 [2024-11-20 13:47:31.978441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.172 [2024-11-20 13:47:31.978588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.172 [2024-11-20 13:47:31.978622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:20.172 [2024-11-20 13:47:31.978637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.172 [2024-11-20 13:47:31.978650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.172 [2024-11-20 13:47:31.978706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.172 [2024-11-20 13:47:31.978721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:20.172 [2024-11-20 13:47:31.978742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.172 [2024-11-20 13:47:31.978755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.172 [2024-11-20 13:47:31.978810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.172 [2024-11-20 13:47:31.978826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:20.172 [2024-11-20 13:47:31.978839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.172 [2024-11-20 13:47:31.978852] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:20.172 [2024-11-20 13:47:31.978915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:20.172 [2024-11-20 13:47:31.978931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:20.172 [2024-11-20 13:47:31.978950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:20.172 [2024-11-20 13:47:31.978963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.172 [2024-11-20 13:47:31.979154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 560.768 ms, result 0 00:28:21.544 00:28:21.544 00:28:21.544 13:47:33 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79076 00:28:21.544 13:47:33 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:28:21.544 13:47:33 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79076 00:28:21.544 13:47:33 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79076 ']' 00:28:21.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.545 13:47:33 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.545 13:47:33 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.545 13:47:33 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.545 13:47:33 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.545 13:47:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:28:21.545 [2024-11-20 13:47:33.284192] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:28:21.545 [2024-11-20 13:47:33.284343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79076 ] 00:28:21.545 [2024-11-20 13:47:33.469479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.802 [2024-11-20 13:47:33.588920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.735 13:47:34 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.735 13:47:34 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:28:22.735 13:47:34 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:28:22.735 [2024-11-20 13:47:34.680283] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:22.735 [2024-11-20 13:47:34.680364] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:22.994 [2024-11-20 13:47:34.864927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.994 [2024-11-20 13:47:34.864997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:22.994 [2024-11-20 13:47:34.865018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:22.994 [2024-11-20 13:47:34.865029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.995 [2024-11-20 13:47:34.868849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.995 [2024-11-20 13:47:34.868894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:22.995 [2024-11-20 13:47:34.868909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.803 ms 00:28:22.995 [2024-11-20 13:47:34.868920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.995 [2024-11-20 13:47:34.869028] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:22.995 [2024-11-20 13:47:34.870058] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:22.995 [2024-11-20 13:47:34.870096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.995 [2024-11-20 13:47:34.870107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:22.995 [2024-11-20 13:47:34.870121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:28:22.995 [2024-11-20 13:47:34.870131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.995 [2024-11-20 13:47:34.871710] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:22.995 [2024-11-20 13:47:34.893020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.995 [2024-11-20 13:47:34.893093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:22.995 [2024-11-20 13:47:34.893112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.347 ms 00:28:22.995 [2024-11-20 13:47:34.893128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.995 [2024-11-20 13:47:34.893276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.995 [2024-11-20 13:47:34.893298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:22.995 [2024-11-20 13:47:34.893311] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:22.995 [2024-11-20 13:47:34.893328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.995 [2024-11-20 13:47:34.900587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.995 [2024-11-20 13:47:34.900653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:22.995 [2024-11-20 13:47:34.900668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.208 ms 00:28:22.995 [2024-11-20 13:47:34.900685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.995 [2024-11-20 13:47:34.900881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.995 [2024-11-20 13:47:34.900904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:22.995 [2024-11-20 13:47:34.900917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:28:22.995 [2024-11-20 13:47:34.900942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.995 [2024-11-20 13:47:34.900981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.995 [2024-11-20 13:47:34.900998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:22.995 [2024-11-20 13:47:34.901009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:22.995 [2024-11-20 13:47:34.901025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.995 [2024-11-20 13:47:34.901054] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:22.995 [2024-11-20 13:47:34.906206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.995 [2024-11-20 13:47:34.906255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:22.995 [2024-11-20 13:47:34.906274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.164 ms 00:28:22.995 [2024-11-20 13:47:34.906285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.995 [2024-11-20 13:47:34.906395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.995 [2024-11-20 13:47:34.906409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:22.995 [2024-11-20 13:47:34.906426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:22.995 [2024-11-20 13:47:34.906441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.995 [2024-11-20 13:47:34.906479] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:22.995 [2024-11-20 13:47:34.906505] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:22.995 [2024-11-20 13:47:34.906560] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:22.995 [2024-11-20 13:47:34.906586] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:22.995 [2024-11-20 13:47:34.906724] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:22.995 [2024-11-20 13:47:34.906740] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:22.995 [2024-11-20 13:47:34.906767] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:22.995 [2024-11-20 13:47:34.906781] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:22.995 [2024-11-20 13:47:34.906799] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:22.995 [2024-11-20 13:47:34.906810] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:22.995 [2024-11-20 13:47:34.906832] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:22.995 [2024-11-20 13:47:34.906843] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:22.995 [2024-11-20 13:47:34.906863] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:22.995 [2024-11-20 13:47:34.906874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.995 [2024-11-20 13:47:34.906889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:22.995 [2024-11-20 13:47:34.906901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:28:22.995 [2024-11-20 13:47:34.906916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.995 [2024-11-20 13:47:34.907008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.995 [2024-11-20 13:47:34.907034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:22.995 [2024-11-20 13:47:34.907046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:22.995 [2024-11-20 13:47:34.907062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.995 [2024-11-20 13:47:34.907161] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:22.995 [2024-11-20 13:47:34.907180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:22.995 [2024-11-20 13:47:34.907191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:22.995 [2024-11-20 13:47:34.907215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.995 [2024-11-20 13:47:34.907227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:22.995 [2024-11-20 13:47:34.907241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:22.995 [2024-11-20 13:47:34.907251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:22.995 [2024-11-20 13:47:34.907271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:22.995 [2024-11-20 13:47:34.907281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:22.995 [2024-11-20 13:47:34.907295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:22.995 [2024-11-20 13:47:34.907304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:22.995 [2024-11-20 13:47:34.907318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:22.995 [2024-11-20 13:47:34.907328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:22.995 [2024-11-20 13:47:34.907343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:22.995 [2024-11-20 13:47:34.907356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:22.995 [2024-11-20 13:47:34.907374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.995 
[2024-11-20 13:47:34.907384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:22.995 [2024-11-20 13:47:34.907398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:22.995 [2024-11-20 13:47:34.907408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.995 [2024-11-20 13:47:34.907423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:22.995 [2024-11-20 13:47:34.907446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:22.995 [2024-11-20 13:47:34.907461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.995 [2024-11-20 13:47:34.907471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:22.995 [2024-11-20 13:47:34.907489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:22.995 [2024-11-20 13:47:34.907508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.995 [2024-11-20 13:47:34.907522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:22.995 [2024-11-20 13:47:34.907532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:22.995 [2024-11-20 13:47:34.907545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.995 [2024-11-20 13:47:34.907555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:22.995 [2024-11-20 13:47:34.907569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:22.995 [2024-11-20 13:47:34.907579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.995 [2024-11-20 13:47:34.907593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:22.995 [2024-11-20 13:47:34.907627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:22.995 [2024-11-20 13:47:34.907645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:22.995 [2024-11-20 13:47:34.907655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:22.995 [2024-11-20 13:47:34.907669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:22.995 [2024-11-20 13:47:34.907679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:22.995 [2024-11-20 13:47:34.907693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:22.995 [2024-11-20 13:47:34.907703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:22.995 [2024-11-20 13:47:34.907722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.995 [2024-11-20 13:47:34.907732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:22.995 [2024-11-20 13:47:34.907749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:22.995 [2024-11-20 13:47:34.907759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.996 [2024-11-20 13:47:34.907773] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:22.996 [2024-11-20 13:47:34.907791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:22.996 [2024-11-20 13:47:34.907806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:22.996 [2024-11-20 13:47:34.907817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.996 [2024-11-20 13:47:34.907834] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:28:22.996 [2024-11-20 13:47:34.907844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:22.996 [2024-11-20 13:47:34.907857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:22.996 [2024-11-20 13:47:34.907868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:22.996 [2024-11-20 13:47:34.907890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:22.996 [2024-11-20 13:47:34.907899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:22.996 [2024-11-20 13:47:34.907916] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:22.996 [2024-11-20 13:47:34.907929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:22.996 [2024-11-20 13:47:34.907950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:22.996 [2024-11-20 13:47:34.907961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:22.996 [2024-11-20 13:47:34.907977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:22.996 [2024-11-20 13:47:34.907988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:22.996 [2024-11-20 13:47:34.908007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:22.996 [2024-11-20 13:47:34.908018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:22.996 [2024-11-20 13:47:34.908032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:22.996 [2024-11-20 13:47:34.908043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:22.996 [2024-11-20 13:47:34.908059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:22.996 [2024-11-20 13:47:34.908070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:22.996 [2024-11-20 13:47:34.908084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:22.996 [2024-11-20 13:47:34.908095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:22.996 [2024-11-20 13:47:34.908116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:22.996 [2024-11-20 13:47:34.908134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:22.996 [2024-11-20 13:47:34.908150] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:22.996 [2024-11-20 
13:47:34.908163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:22.996 [2024-11-20 13:47:34.908183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:22.996 [2024-11-20 13:47:34.908194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:22.996 [2024-11-20 13:47:34.908209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:22.996 [2024-11-20 13:47:34.908220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:22.996 [2024-11-20 13:47:34.908235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.996 [2024-11-20 13:47:34.908246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:22.996 [2024-11-20 13:47:34.908269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:28:22.996 [2024-11-20 13:47:34.908280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.996 [2024-11-20 13:47:34.949631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.996 [2024-11-20 13:47:34.949696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:22.996 [2024-11-20 13:47:34.949718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.335 ms 00:28:22.996 [2024-11-20 13:47:34.949736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.996 [2024-11-20 13:47:34.949918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.996 [2024-11-20 13:47:34.949932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:22.996 [2024-11-20 13:47:34.949949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:22.996 [2024-11-20 13:47:34.949960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.254 [2024-11-20 13:47:34.999742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.254 [2024-11-20 13:47:34.999813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:23.254 [2024-11-20 13:47:34.999836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.825 ms 00:28:23.254 [2024-11-20 13:47:34.999847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.254 [2024-11-20 13:47:35.000009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.254 [2024-11-20 13:47:35.000023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:23.254 [2024-11-20 13:47:35.000041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:23.254 [2024-11-20 13:47:35.000051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.254 [2024-11-20 13:47:35.000501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.254 [2024-11-20 13:47:35.000522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:23.254 [2024-11-20 13:47:35.000544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:28:23.254 [2024-11-20 13:47:35.000554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:23.254 [2024-11-20 13:47:35.000700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.254 [2024-11-20 13:47:35.000721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:23.254 [2024-11-20 13:47:35.000737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:28:23.254 [2024-11-20 13:47:35.000748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.254 [2024-11-20 13:47:35.023105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.254 [2024-11-20 13:47:35.023167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:23.254 [2024-11-20 13:47:35.023190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.356 ms 00:28:23.254 [2024-11-20 13:47:35.023202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.254 [2024-11-20 13:47:35.050524] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:23.254 [2024-11-20 13:47:35.050588] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:23.254 [2024-11-20 13:47:35.050619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.254 [2024-11-20 13:47:35.050632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:23.254 [2024-11-20 13:47:35.050650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.296 ms 00:28:23.254 [2024-11-20 13:47:35.050661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.254 [2024-11-20 13:47:35.081681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.254 [2024-11-20 13:47:35.081767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:23.254 [2024-11-20 13:47:35.081789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.907 ms 00:28:23.254 [2024-11-20 13:47:35.081801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.254 [2024-11-20 13:47:35.101540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.254 [2024-11-20 13:47:35.101618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:23.254 [2024-11-20 13:47:35.101642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.597 ms 00:28:23.254 [2024-11-20 13:47:35.101655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.254 [2024-11-20 13:47:35.121176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.254 [2024-11-20 13:47:35.121244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:23.254 [2024-11-20 13:47:35.121265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.404 ms 00:28:23.254 [2024-11-20 13:47:35.121276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.254 [2024-11-20 13:47:35.122184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.254 [2024-11-20 13:47:35.122221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:23.254 [2024-11-20 13:47:35.122240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:28:23.254 [2024-11-20 13:47:35.122252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.513 [2024-11-20 
13:47:35.213428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.513 [2024-11-20 13:47:35.213509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:23.513 [2024-11-20 13:47:35.213535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.279 ms 00:28:23.513 [2024-11-20 13:47:35.213547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.513 [2024-11-20 13:47:35.226297] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:23.513 [2024-11-20 13:47:35.243372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.513 [2024-11-20 13:47:35.243446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:23.513 [2024-11-20 13:47:35.243467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.657 ms 00:28:23.513 [2024-11-20 13:47:35.243481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.513 [2024-11-20 13:47:35.243634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.513 [2024-11-20 13:47:35.243652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:23.513 [2024-11-20 13:47:35.243665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:23.513 [2024-11-20 13:47:35.243679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.513 [2024-11-20 13:47:35.243738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.513 [2024-11-20 13:47:35.243753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:23.513 [2024-11-20 13:47:35.243763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:23.513 [2024-11-20 13:47:35.243779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.513 [2024-11-20 13:47:35.243804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.513 [2024-11-20 13:47:35.243818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:23.513 [2024-11-20 13:47:35.243828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:23.513 [2024-11-20 13:47:35.243841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.513 [2024-11-20 13:47:35.243883] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:23.513 [2024-11-20 13:47:35.243901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.513 [2024-11-20 13:47:35.243911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:23.513 [2024-11-20 13:47:35.243928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:23.513 [2024-11-20 13:47:35.243938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.513 [2024-11-20 13:47:35.280418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.513 [2024-11-20 13:47:35.280489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:23.513 [2024-11-20 13:47:35.280512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.488 ms 00:28:23.513 [2024-11-20 13:47:35.280524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.513 [2024-11-20 13:47:35.280717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.513 [2024-11-20 13:47:35.280736] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:23.513 [2024-11-20 13:47:35.280753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:23.513 [2024-11-20 13:47:35.280769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.513 [2024-11-20 13:47:35.281933] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:23.513 [2024-11-20 13:47:35.286875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 417.231 ms, result 0 00:28:23.513 [2024-11-20 13:47:35.288341] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:23.513 Some configs were skipped because the RPC state that can call them passed over. 00:28:23.513 13:47:35 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:28:23.771 [2024-11-20 13:47:35.536325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.771 [2024-11-20 13:47:35.536431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:28:23.771 [2024-11-20 13:47:35.536451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.828 ms 00:28:23.771 [2024-11-20 13:47:35.536466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.772 [2024-11-20 13:47:35.536507] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.023 ms, result 0 00:28:23.772 true 00:28:23.772 13:47:35 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:28:24.030 [2024-11-20 13:47:35.763655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.030 [2024-11-20 13:47:35.763721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:28:24.030 [2024-11-20 13:47:35.763742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.249 ms 00:28:24.030 [2024-11-20 13:47:35.763754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.030 [2024-11-20 13:47:35.763802] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.409 ms, result 0 00:28:24.030 true 00:28:24.030 13:47:35 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79076 00:28:24.030 13:47:35 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79076 ']' 00:28:24.030 13:47:35 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79076 00:28:24.030 13:47:35 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:28:24.030 13:47:35 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:24.030 13:47:35 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79076 00:28:24.030 killing process with pid 79076 00:28:24.030 13:47:35 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:24.030 13:47:35 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:24.030 13:47:35 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79076' 00:28:24.030 13:47:35 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79076 00:28:24.030 13:47:35 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79076 00:28:25.407 [2024-11-20 13:47:36.956191] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:25.407 [2024-11-20 13:47:36.956267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:25.407 [2024-11-20 13:47:36.956285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:28:25.407 [2024-11-20 13:47:36.956299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.407 [2024-11-20 13:47:36.956326] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:28:25.407 [2024-11-20 13:47:36.960496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:25.407 [2024-11-20 13:47:36.960543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:28:25.407 [2024-11-20 13:47:36.960565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.153 ms
00:28:25.407 [2024-11-20 13:47:36.960576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.407 [2024-11-20 13:47:36.960870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:25.407 [2024-11-20 13:47:36.960887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:28:25.407 [2024-11-20 13:47:36.960902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms
00:28:25.407 [2024-11-20 13:47:36.960912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.407 [2024-11-20 13:47:36.964317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:25.407 [2024-11-20 13:47:36.964358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:28:25.407 [2024-11-20 13:47:36.964377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.384 ms
00:28:25.407 [2024-11-20 13:47:36.964389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.407 [2024-11-20 13:47:36.970158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:25.407 [2024-11-20 13:47:36.970206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:28:25.407 [2024-11-20 13:47:36.970222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.734 ms
00:28:25.407 [2024-11-20 13:47:36.970232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.407 [2024-11-20 13:47:36.986108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:25.407 [2024-11-20 13:47:36.986188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:28:25.407 [2024-11-20 13:47:36.986214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.814 ms
00:28:25.407 [2024-11-20 13:47:36.986240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.407 [2024-11-20 13:47:36.997902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:25.407 [2024-11-20 13:47:36.997984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:28:25.407 [2024-11-20 13:47:36.998005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.473 ms
00:28:25.407 [2024-11-20 13:47:36.998017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.407 [2024-11-20 13:47:36.998210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:25.407 [2024-11-20 13:47:36.998226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:28:25.407 [2024-11-20 13:47:36.998241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms
00:28:25.407 [2024-11-20 13:47:36.998252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.407 [2024-11-20 13:47:37.015189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:25.407 [2024-11-20 13:47:37.015274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:28:25.407 [2024-11-20 13:47:37.015309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.922 ms
00:28:25.407 [2024-11-20 13:47:37.015320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.407 [2024-11-20 13:47:37.031389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:25.407 [2024-11-20 13:47:37.031474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:28:25.407 [2024-11-20 13:47:37.031503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.989 ms
00:28:25.407 [2024-11-20 13:47:37.031513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.407 [2024-11-20 13:47:37.047540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:25.407 [2024-11-20 13:47:37.047659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:28:25.407 [2024-11-20 13:47:37.047689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.940 ms
00:28:25.407 [2024-11-20 13:47:37.047701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.407 [2024-11-20 13:47:37.064115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:25.407 [2024-11-20 13:47:37.064198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:28:25.407 [2024-11-20 13:47:37.064222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.280 ms
00:28:25.407 [2024-11-20 13:47:37.064233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.407 [2024-11-20 13:47:37.064323] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:25.407 [2024-11-20 13:47:37.064346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:28:25.407 [2024-11-20 13:47:37.064848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.064860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.064877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.064889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.064905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.064917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.064939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.064951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.064968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.064980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.064997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:28:25.408 [2024-11-20 13:47:37.065882] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:25.408 [2024-11-20 13:47:37.065922] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 95612c22-11a0-46d9-b67f-3ffaf6f746c4
00:28:25.408 [2024-11-20 13:47:37.065951] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:28:25.408 [2024-11-20 13:47:37.065974] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:28:25.408 [2024-11-20 13:47:37.065984] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:28:25.408 [2024-11-20 13:47:37.066001] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:28:25.408 [2024-11-20 13:47:37.066012] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:25.408 [2024-11-20 13:47:37.066027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:28:25.408 [2024-11-20 13:47:37.066038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:28:25.408 [2024-11-20 13:47:37.066052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:28:25.408 [2024-11-20 13:47:37.066061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:28:25.408 [2024-11-20 13:47:37.066078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:25.408 [2024-11-20 13:47:37.066089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:28:25.408 [2024-11-20 13:47:37.066106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.761 ms
00:28:25.408 [2024-11-20 13:47:37.066116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.408 [2024-11-20 13:47:37.086767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:25.408 [2024-11-20 13:47:37.086842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:28:25.408 [2024-11-20 13:47:37.086872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.623 ms
00:28:25.408 [2024-11-20 13:47:37.086884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.408 [2024-11-20 13:47:37.087578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:25.408 [2024-11-20 13:47:37.087627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:28:25.408 [2024-11-20 13:47:37.087647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms
00:28:25.408 [2024-11-20 13:47:37.087665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.408 [2024-11-20 13:47:37.161890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:25.408 [2024-11-20 13:47:37.161969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:28:25.408 [2024-11-20 13:47:37.161993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:25.409 [2024-11-20 13:47:37.162004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.409 [2024-11-20 13:47:37.162191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:25.409 [2024-11-20 13:47:37.162207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:28:25.409 [2024-11-20 13:47:37.162241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:25.409 [2024-11-20 13:47:37.162258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.409 [2024-11-20 13:47:37.162332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:25.409 [2024-11-20 13:47:37.162345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:28:25.409 [2024-11-20 13:47:37.162367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:25.409 [2024-11-20 13:47:37.162378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.409 [2024-11-20 13:47:37.162405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:25.409 [2024-11-20 13:47:37.162417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:28:25.409 [2024-11-20 13:47:37.162432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:25.409 [2024-11-20 13:47:37.162444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.409 [2024-11-20 13:47:37.284709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:25.409 [2024-11-20 13:47:37.284784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:28:25.409 [2024-11-20 13:47:37.284804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:25.409 [2024-11-20 13:47:37.284816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.667 [2024-11-20 13:47:37.390288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:25.667 [2024-11-20 13:47:37.390367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:28:25.667 [2024-11-20 13:47:37.390389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:25.667 [2024-11-20 13:47:37.390406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.667 [2024-11-20 13:47:37.390554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:25.667 [2024-11-20 13:47:37.390569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:28:25.667 [2024-11-20 13:47:37.390590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:25.667 [2024-11-20 13:47:37.390619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.667 [2024-11-20 13:47:37.390658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:25.667 [2024-11-20 13:47:37.390670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:28:25.667 [2024-11-20 13:47:37.390685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:25.667 [2024-11-20 13:47:37.390695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.667 [2024-11-20 13:47:37.390839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:25.667 [2024-11-20 13:47:37.390853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:28:25.667 [2024-11-20 13:47:37.390869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:25.667 [2024-11-20 13:47:37.390880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.667 [2024-11-20 13:47:37.390927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:25.667 [2024-11-20 13:47:37.390939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:28:25.667 [2024-11-20 13:47:37.390954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:25.667 [2024-11-20 13:47:37.390965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.667 [2024-11-20 13:47:37.391019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:25.667 [2024-11-20 13:47:37.391030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:28:25.667 [2024-11-20 13:47:37.391052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:25.667 [2024-11-20 13:47:37.391063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.667 [2024-11-20 13:47:37.391116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:25.667 [2024-11-20 13:47:37.391128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:25.667 [2024-11-20 13:47:37.391144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:25.667 [2024-11-20 13:47:37.391155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:25.667 [2024-11-20 13:47:37.391311] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 435.796 ms, result 0
00:28:26.602 13:47:38 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:28:26.859 [2024-11-20 13:47:38.583322] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:28:26.860 [2024-11-20 13:47:38.583467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79141 ]
00:28:26.860 [2024-11-20 13:47:38.763432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:27.118 [2024-11-20 13:47:38.882095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:27.376 [2024-11-20 13:47:39.254213] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:27.376 [2024-11-20 13:47:39.254280] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:27.634 [2024-11-20 13:47:39.420159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.634 [2024-11-20 13:47:39.420227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:28:27.634 [2024-11-20 13:47:39.420245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:28:27.634 [2024-11-20 13:47:39.420257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.634 [2024-11-20 13:47:39.423690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.634 [2024-11-20 13:47:39.423736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:27.634 [2024-11-20 13:47:39.423752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.415 ms
00:28:27.634 [2024-11-20 13:47:39.423763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.634 [2024-11-20 13:47:39.423887] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:28:27.634 [2024-11-20 13:47:39.424881] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:28:27.634 [2024-11-20 13:47:39.424918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.634 [2024-11-20 13:47:39.424929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:28:27.634 [2024-11-20 13:47:39.424941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms
00:28:27.634 [2024-11-20 13:47:39.424951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.634 [2024-11-20 13:47:39.426486] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:28:27.635 [2024-11-20 13:47:39.447284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.635 [2024-11-20 13:47:39.447352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:28:27.635 [2024-11-20 13:47:39.447370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.830 ms
00:28:27.635 [2024-11-20 13:47:39.447381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.635 [2024-11-20 13:47:39.447520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.635 [2024-11-20 13:47:39.447535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:28:27.635 [2024-11-20 13:47:39.447547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms
00:28:27.635 [2024-11-20 13:47:39.447557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.635 [2024-11-20 13:47:39.454573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.635 [2024-11-20 13:47:39.454621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:28:27.635 [2024-11-20 13:47:39.454636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.956 ms
00:28:27.635 [2024-11-20 13:47:39.454646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.635 [2024-11-20 13:47:39.454761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.635 [2024-11-20 13:47:39.454777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:28:27.635 [2024-11-20 13:47:39.454788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms
00:28:27.635 [2024-11-20 13:47:39.454810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.635 [2024-11-20 13:47:39.454844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.635 [2024-11-20 13:47:39.454860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:28:27.635 [2024-11-20 13:47:39.454871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:28:27.635 [2024-11-20 13:47:39.454881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.635 [2024-11-20 13:47:39.454908] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:28:27.635 [2024-11-20 13:47:39.460061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.635 [2024-11-20 13:47:39.460097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:28:27.635 [2024-11-20 13:47:39.460110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.168 ms
00:28:27.635 [2024-11-20 13:47:39.460122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.635 [2024-11-20 13:47:39.460201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.635 [2024-11-20 13:47:39.460216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:28:27.635 [2024-11-20 13:47:39.460228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:28:27.635 [2024-11-20 13:47:39.460239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.635 [2024-11-20 13:47:39.460264] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:28:27.635 [2024-11-20 13:47:39.460293] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:28:27.635 [2024-11-20 13:47:39.460332] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:28:27.635 [2024-11-20 13:47:39.460352] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:28:27.635 [2024-11-20 13:47:39.460446] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:28:27.635 [2024-11-20 13:47:39.460459] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:28:27.635 [2024-11-20 13:47:39.460473] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:28:27.635 [2024-11-20 13:47:39.460488] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:28:27.635 [2024-11-20 13:47:39.460504] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:28:27.635 [2024-11-20 13:47:39.460517] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:28:27.635 [2024-11-20 13:47:39.460528] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:28:27.635 [2024-11-20 13:47:39.460539] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:28:27.635 [2024-11-20 13:47:39.460550] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:28:27.635 [2024-11-20 13:47:39.460561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.635 [2024-11-20 13:47:39.460572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:28:27.635 [2024-11-20 13:47:39.460583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms
00:28:27.635 [2024-11-20 13:47:39.460593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.635 [2024-11-20 13:47:39.460693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.635 [2024-11-20 13:47:39.460709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:28:27.635 [2024-11-20 13:47:39.460720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms
00:28:27.635 [2024-11-20 13:47:39.460732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.635 [2024-11-20 13:47:39.460832] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:28:27.635 [2024-11-20 13:47:39.460845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:28:27.635 [2024-11-20 13:47:39.460858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:28:27.635 [2024-11-20 13:47:39.460869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:27.635 [2024-11-20 13:47:39.460880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:28:27.635 [2024-11-20 13:47:39.460890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:28:27.635 [2024-11-20 13:47:39.460900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:28:27.635 [2024-11-20 13:47:39.460910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:28:27.635 [2024-11-20 13:47:39.460920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:28:27.635 [2024-11-20 13:47:39.460931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:28:27.635 [2024-11-20 13:47:39.460941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:28:27.635 [2024-11-20 13:47:39.460951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:28:27.635 [2024-11-20 13:47:39.460961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:28:27.635 [2024-11-20 13:47:39.460982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:28:27.635 [2024-11-20 13:47:39.460993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:28:27.635 [2024-11-20 13:47:39.461003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:27.635 [2024-11-20 13:47:39.461013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:28:27.635 [2024-11-20 13:47:39.461023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:28:27.635 [2024-11-20 13:47:39.461032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:27.635 [2024-11-20 13:47:39.461042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:28:27.635 [2024-11-20 13:47:39.461051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:28:27.635 [2024-11-20 13:47:39.461062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:27.635 [2024-11-20 13:47:39.461072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:28:27.635 [2024-11-20 13:47:39.461082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:28:27.635 [2024-11-20 13:47:39.461092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:27.635 [2024-11-20 13:47:39.461101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:28:27.635 [2024-11-20 13:47:39.461111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:28:27.635 [2024-11-20 13:47:39.461120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:27.635 [2024-11-20 13:47:39.461130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:28:27.635 [2024-11-20 13:47:39.461140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:28:27.635 [2024-11-20 13:47:39.461149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:27.635 [2024-11-20 13:47:39.461158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:28:27.635 [2024-11-20 13:47:39.461167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:28:27.635 [2024-11-20 13:47:39.461177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:28:27.635 [2024-11-20 13:47:39.461186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:28:27.635 [2024-11-20 13:47:39.461196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:28:27.635 [2024-11-20 13:47:39.461205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:28:27.636 [2024-11-20 13:47:39.461215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:28:27.636 [2024-11-20 13:47:39.461224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:28:27.636 [2024-11-20 13:47:39.461233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:27.636 [2024-11-20 13:47:39.461255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:28:27.636 [2024-11-20 13:47:39.461266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:28:27.636 [2024-11-20 13:47:39.461274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:27.636 [2024-11-20 13:47:39.461284] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:28:27.636 [2024-11-20 13:47:39.461293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:28:27.636 [2024-11-20 13:47:39.461303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:28:27.636 [2024-11-20 13:47:39.461317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:27.636 [2024-11-20 13:47:39.461327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:28:27.636 [2024-11-20 13:47:39.461336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:28:27.636 [2024-11-20 13:47:39.461345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:28:27.636 [2024-11-20 13:47:39.461354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:28:27.636 [2024-11-20 13:47:39.461363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:28:27.636 [2024-11-20 13:47:39.461372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:28:27.636 [2024-11-20 13:47:39.461383] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:28:27.636 [2024-11-20 13:47:39.461396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:27.636 [2024-11-20 13:47:39.461408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:28:27.636 [2024-11-20 13:47:39.461418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:28:27.636 [2024-11-20 13:47:39.461428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:28:27.636 [2024-11-20 13:47:39.461438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:28:27.636 [2024-11-20 13:47:39.461448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:28:27.636 [2024-11-20 13:47:39.461458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:28:27.636 [2024-11-20 13:47:39.461468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:28:27.636 [2024-11-20 13:47:39.461478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:28:27.636 [2024-11-20 13:47:39.461489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:28:27.636 [2024-11-20 13:47:39.461499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:28:27.636 [2024-11-20 13:47:39.461509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:28:27.636 [2024-11-20 13:47:39.461519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:28:27.636 [2024-11-20 13:47:39.461529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:28:27.636 [2024-11-20 13:47:39.461540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:28:27.636 [2024-11-20 13:47:39.461550] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:28:27.636 [2024-11-20 13:47:39.461561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:27.636 [2024-11-20 13:47:39.461572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:28:27.636 [2024-11-20 13:47:39.461582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:28:27.636 [2024-11-20 13:47:39.461595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:28:27.636 [2024-11-20 13:47:39.461606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:28:27.636 [2024-11-20 13:47:39.461628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.636 [2024-11-20 13:47:39.461639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:28:27.636 [2024-11-20 13:47:39.461653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms
00:28:27.636 [2024-11-20 13:47:39.461664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.636 [2024-11-20 13:47:39.501317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.636 [2024-11-20 13:47:39.501375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:28:27.636 [2024-11-20 13:47:39.501393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.653 ms
00:28:27.636 [2024-11-20 13:47:39.501404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.636 [2024-11-20 13:47:39.501573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.636 [2024-11-20 13:47:39.501591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:28:27.636 [2024-11-20 13:47:39.501622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:28:27.636 [2024-11-20 13:47:39.501633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.636 [2024-11-20 13:47:39.561576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.636 [2024-11-20 13:47:39.561653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:28:27.636 [2024-11-20 13:47:39.561671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.011 ms
00:28:27.636 [2024-11-20 13:47:39.561688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.636 [2024-11-20 13:47:39.561843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.636 [2024-11-20 13:47:39.561857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:28:27.636 [2024-11-20 13:47:39.561869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:28:27.636 [2024-11-20 13:47:39.561879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.636 [2024-11-20 13:47:39.562357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.636 [2024-11-20 13:47:39.562379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:28:27.636 [2024-11-20 13:47:39.562392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms
00:28:27.636 [2024-11-20 13:47:39.562410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.636 [2024-11-20 13:47:39.562544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.636 [2024-11-20 13:47:39.562575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:28:27.636 [2024-11-20 13:47:39.562587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms
00:28:27.636 [2024-11-20 13:47:39.562610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.636 [2024-11-20 13:47:39.583925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.636 [2024-11-20 13:47:39.583992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:28:27.636 [2024-11-20 13:47:39.584010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.319 ms
00:28:27.636 [2024-11-20 13:47:39.584021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.943 [2024-11-20 13:47:39.604176] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:28:27.943 [2024-11-20 13:47:39.604244] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:28:27.943 [2024-11-20 13:47:39.604263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.943 [2024-11-20 13:47:39.604275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:28:27.943 [2024-11-20 13:47:39.604291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.104 ms
00:28:27.943 [2024-11-20 13:47:39.604301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.943 [2024-11-20 13:47:39.635407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.943 [2024-11-20 13:47:39.635514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:28:27.943 [2024-11-20 13:47:39.635532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.007 ms
00:28:27.943 [2024-11-20 13:47:39.635543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.943 [2024-11-20 13:47:39.654810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.943 [2024-11-20 13:47:39.654884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:28:27.943 [2024-11-20 13:47:39.654901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.139 ms
00:28:27.943 [2024-11-20 13:47:39.654912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.943 [2024-11-20 13:47:39.673845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.943 [2024-11-20 13:47:39.673911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:28:27.943 [2024-11-20 13:47:39.673928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.846 ms
00:28:27.943 [2024-11-20 13:47:39.673938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.943 [2024-11-20 13:47:39.674812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.943 [2024-11-20 13:47:39.674849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:28:27.943 [2024-11-20 13:47:39.674863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms
00:28:27.943 [2024-11-20 13:47:39.674874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.943 [2024-11-20 13:47:39.776534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.943 [2024-11-20 13:47:39.776684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:28:27.943 [2024-11-20 13:47:39.776719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.772 ms
00:28:27.943 [2024-11-20 13:47:39.776742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.943 [2024-11-20 13:47:39.799464] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:28:27.943 [2024-11-20 13:47:39.826568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.943 [2024-11-20 13:47:39.826649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:28:27.943 [2024-11-20 13:47:39.826666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.544 ms
00:28:27.943 [2024-11-20 13:47:39.826686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.943 [2024-11-20 13:47:39.826845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.943 [2024-11-20 13:47:39.826860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:28:27.943 [2024-11-20 13:47:39.826874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:28:27.943 [2024-11-20 13:47:39.826886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.943 [2024-11-20 13:47:39.826966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.943 [2024-11-20 13:47:39.826979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:28:27.943 [2024-11-20 13:47:39.826990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:28:27.943 [2024-11-20 13:47:39.827000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.943 [2024-11-20 13:47:39.827051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.943 [2024-11-20 13:47:39.827064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:28:27.943 [2024-11-20 13:47:39.827076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms
00:28:27.943 [2024-11-20 13:47:39.827087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.943 [2024-11-20 13:47:39.827132] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:28:27.943 [2024-11-20 13:47:39.827146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.943 [2024-11-20 13:47:39.827157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:28:27.943 [2024-11-20 13:47:39.827169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:28:27.943 [2024-11-20 13:47:39.827179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.943 [2024-11-20 13:47:39.864396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.943 [2024-11-20 13:47:39.864446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:28:27.943 [2024-11-20 13:47:39.864461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.251 ms
00:28:27.943 [2024-11-20 13:47:39.864473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.943 [2024-11-20 13:47:39.864622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.943 [2024-11-20 13:47:39.864638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:28:27.943 [2024-11-20 13:47:39.864650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms
00:28:27.943 [2024-11-20 13:47:39.864661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.943 [2024-11-20 13:47:39.865932] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:27.943 [2024-11-20 13:47:39.870335] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 446.123 ms, result 0
00:28:27.943 [2024-11-20 13:47:39.871176] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:28.219 [2024-11-20 13:47:39.889493] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:29.151  [2024-11-20T13:47:42.039Z] Copying: 30/256 [MB] (30 MBps) [2024-11-20T13:47:42.970Z] Copying: 58/256 [MB] (27 MBps) [2024-11-20T13:47:44.344Z] Copying: 84/256 [MB] (25 MBps) [2024-11-20T13:47:45.285Z] Copying: 109/256 [MB] (25 MBps) [2024-11-20T13:47:46.217Z] Copying: 136/256 [MB] (26 MBps) [2024-11-20T13:47:47.151Z] Copying: 162/256 [MB] (26 MBps) [2024-11-20T13:47:48.084Z] Copying: 189/256 [MB] (27 MBps) [2024-11-20T13:47:49.017Z] Copying: 215/256 [MB] (26 MBps) [2024-11-20T13:47:49.584Z] Copying: 242/256 [MB] (26 MBps) [2024-11-20T13:47:49.905Z] Copying: 256/256 [MB] (average 27 MBps)[2024-11-20 13:47:49.892212] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:38.208 [2024-11-20 13:47:49.911476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:38.208 [2024-11-20 13:47:49.911536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:38.208 [2024-11-20 13:47:49.911555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:28:38.208 [2024-11-20 13:47:49.911580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.208 [2024-11-20 13:47:49.911624] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:28:38.208 [2024-11-20 13:47:49.917198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:38.208 [2024-11-20 13:47:49.917275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:28:38.208 [2024-11-20 13:47:49.917303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.560 ms
00:28:38.208 [2024-11-20 13:47:49.917315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.208 [2024-11-20 13:47:49.917670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:38.208 [2024-11-20 13:47:49.917692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:28:38.208 [2024-11-20 13:47:49.917704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms
00:28:38.208 [2024-11-20 13:47:49.917717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.208 [2024-11-20 13:47:49.921073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:38.208 [2024-11-20 13:47:49.921109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:28:38.208 [2024-11-20 13:47:49.921121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.341 ms
00:28:38.208 [2024-11-20 13:47:49.921132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.208 [2024-11-20 13:47:49.927893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:38.208 [2024-11-20 13:47:49.927939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:28:38.208 [2024-11-20 13:47:49.927960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.744 ms
00:28:38.208 [2024-11-20 13:47:49.927972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.208 [2024-11-20 13:47:49.970657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:38.208 [2024-11-20 13:47:49.970733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:28:38.208 [2024-11-20 13:47:49.970749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.650 ms
00:28:38.208 [2024-11-20 13:47:49.970762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.208 [2024-11-20 13:47:49.996304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:38.208 [2024-11-20 13:47:49.996379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:28:38.208 [2024-11-20 13:47:49.996401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.484 ms
00:28:38.208 [2024-11-20 13:47:49.996413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.208 [2024-11-20 13:47:49.996644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:38.208 [2024-11-20 13:47:49.996662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:28:38.208 [2024-11-20 13:47:49.996674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms
00:28:38.208 [2024-11-20 13:47:49.996685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.208 [2024-11-20 13:47:50.038323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:38.208 [2024-11-20 13:47:50.038395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:28:38.208 [2024-11-20 13:47:50.038412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.665 ms
00:28:38.208 [2024-11-20 13:47:50.038425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.208 [2024-11-20 13:47:50.081032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:38.208 [2024-11-20 13:47:50.081098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:28:38.208 [2024-11-20 13:47:50.081116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.578 ms
00:28:38.208 [2024-11-20 13:47:50.081128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.208 [2024-11-20 13:47:50.121428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:38.208 [2024-11-20 13:47:50.121487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:28:38.208 [2024-11-20 13:47:50.121503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.270 ms
00:28:38.208 [2024-11-20 13:47:50.121514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.208 [2024-11-20 13:47:50.161896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:38.208 [2024-11-20 13:47:50.161956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:28:38.208 [2024-11-20 13:47:50.161973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.317 ms
00:28:38.208 [2024-11-20 13:47:50.161984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:38.208 [2024-11-20 13:47:50.162061] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:38.208 [2024-11-20 13:47:50.162085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:28:38.208 [2024-11-20 13:47:50.162100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:28:38.208 [2024-11-20 13:47:50.162114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:28:38.208 [2024-11-20 13:47:50.162127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:28:38.208 [2024-11-20 13:47:50.162139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:28:38.208 [2024-11-20 13:47:50.162151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:28:38.208 [2024-11-20 13:47:50.162164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:28:38.208 [2024-11-20 13:47:50.162183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:28:38.208 [2024-11-20 13:47:50.162197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:28:38.208 [2024-11-20 13:47:50.162226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:28:38.208 [2024-11-20 13:47:50.162250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:28:38.208 [2024-11-20 13:47:50.162262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:28:38.208 [2024-11-20 13:47:50.162274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:28:38.208 [2024-11-20 13:47:50.162286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:28:38.208 [2024-11-20 13:47:50.162298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:28:38.208 [2024-11-20 13:47:50.162309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:28:38.209 [2024-11-20 13:47:50.162734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.162993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163031] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:38.209 [2024-11-20 13:47:50.163300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:38.468 [2024-11-20 13:47:50.163313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:38.468 [2024-11-20 13:47:50.163326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:38.468 [2024-11-20 
13:47:50.163338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:38.468 [2024-11-20 13:47:50.163351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:38.468 [2024-11-20 13:47:50.163372] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:38.468 [2024-11-20 13:47:50.163384] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 95612c22-11a0-46d9-b67f-3ffaf6f746c4 00:28:38.468 [2024-11-20 13:47:50.163397] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:38.468 [2024-11-20 13:47:50.163408] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:38.468 [2024-11-20 13:47:50.163419] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:38.468 [2024-11-20 13:47:50.163430] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:38.468 [2024-11-20 13:47:50.163441] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:38.468 [2024-11-20 13:47:50.163453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:38.468 [2024-11-20 13:47:50.163464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:38.468 [2024-11-20 13:47:50.163475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:38.468 [2024-11-20 13:47:50.163485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:38.468 [2024-11-20 13:47:50.163498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.468 [2024-11-20 13:47:50.163516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:38.468 [2024-11-20 13:47:50.163527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:28:38.468 [2024-11-20 13:47:50.163539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.468 [2024-11-20 13:47:50.185867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.468 [2024-11-20 13:47:50.185918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:38.468 [2024-11-20 13:47:50.185932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.337 ms 00:28:38.468 [2024-11-20 13:47:50.185944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.468 [2024-11-20 13:47:50.186660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.468 [2024-11-20 13:47:50.186687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:38.468 [2024-11-20 13:47:50.186700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:28:38.468 [2024-11-20 13:47:50.186713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.468 [2024-11-20 13:47:50.250422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.468 [2024-11-20 13:47:50.250497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:38.468 [2024-11-20 13:47:50.250514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.468 [2024-11-20 13:47:50.250527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.468 [2024-11-20 13:47:50.250696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.468 [2024-11-20 13:47:50.250712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 00:28:38.468 [2024-11-20 13:47:50.250725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.468 [2024-11-20 13:47:50.250738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.468 [2024-11-20 13:47:50.250799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.468 [2024-11-20 13:47:50.250814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:38.468 [2024-11-20 13:47:50.250831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.468 [2024-11-20 13:47:50.250851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.468 [2024-11-20 13:47:50.250879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.468 [2024-11-20 13:47:50.250898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:38.468 [2024-11-20 13:47:50.250909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.468 [2024-11-20 13:47:50.250921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.468 [2024-11-20 13:47:50.396303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.468 [2024-11-20 13:47:50.396380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:38.468 [2024-11-20 13:47:50.396397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.468 [2024-11-20 13:47:50.396409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.726 [2024-11-20 13:47:50.511389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.727 [2024-11-20 13:47:50.511464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:38.727 [2024-11-20 13:47:50.511480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.727 [2024-11-20 13:47:50.511493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.727 [2024-11-20 13:47:50.511630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.727 [2024-11-20 13:47:50.511645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:38.727 [2024-11-20 13:47:50.511674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.727 [2024-11-20 13:47:50.511687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.727 [2024-11-20 13:47:50.511722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.727 [2024-11-20 13:47:50.511734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:38.727 [2024-11-20 13:47:50.511753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.727 [2024-11-20 13:47:50.511765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.727 [2024-11-20 13:47:50.511906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.727 [2024-11-20 13:47:50.511922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:38.727 [2024-11-20 13:47:50.511934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.727 [2024-11-20 13:47:50.511946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.727 [2024-11-20 13:47:50.511988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.727 [2024-11-20 13:47:50.512002] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:38.727 [2024-11-20 13:47:50.512013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.727 [2024-11-20 13:47:50.512030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.727 [2024-11-20 13:47:50.512081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.727 [2024-11-20 13:47:50.512094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:38.727 [2024-11-20 13:47:50.512106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.727 [2024-11-20 13:47:50.512117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.727 [2024-11-20 13:47:50.512172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.727 [2024-11-20 13:47:50.512186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:38.727 [2024-11-20 13:47:50.512202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.727 [2024-11-20 13:47:50.512214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.727 [2024-11-20 13:47:50.512390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 601.908 ms, result 0 00:28:40.100 00:28:40.100 00:28:40.100 13:47:51 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:40.358 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:28:40.358 13:47:52 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:28:40.358 13:47:52 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:28:40.358 13:47:52 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:40.358 13:47:52 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:40.358 13:47:52 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:28:40.358 13:47:52 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:28:40.616 13:47:52 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79076 00:28:40.616 13:47:52 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79076 ']' 00:28:40.616 13:47:52 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79076 00:28:40.616 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79076) - No such process 00:28:40.616 Process with pid 79076 is not found 00:28:40.616 13:47:52 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79076 is not found' 00:28:40.616 00:28:40.616 real 1m13.145s 00:28:40.616 user 1m42.063s 00:28:40.616 sys 0m7.584s 00:28:40.616 13:47:52 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:40.616 13:47:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:28:40.616 ************************************ 00:28:40.616 END TEST ftl_trim 00:28:40.616 ************************************ 00:28:40.616 13:47:52 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:28:40.616 13:47:52 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:40.616 13:47:52 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:40.616 13:47:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:40.616 ************************************ 
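The block above is the standard FTL test teardown: md5sum -c confirms that the data read back through ftl0 matches the checksum recorded before shutdown, the scratch files are removed, and killprocess tolerates a target that has already exited, which is why the failed kill -0 probe is followed by a plain notice rather than an error. A minimal sketch of that pattern, with hypothetical $testdir and $svcpid names standing in for the paths and pid seen above:

# verify-and-teardown sketch (hypothetical variable names)
md5sum -c "$testdir/testfile.md5"                 # data read back through FTL must match
rm -f "$testdir/testfile.md5" "$testdir/data"     # drop scratch files
if kill -0 "$svcpid" 2> /dev/null; then
    kill "$svcpid"                                # target still alive: stop it
else
    echo "Process with pid $svcpid is not found"  # already gone, as logged above
fi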
00:28:40.616 START TEST ftl_restore 00:28:40.616 ************************************ 00:28:40.616 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:28:40.875 * Looking for test storage... 00:28:40.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:40.875 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:40.875 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:28:40.875 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:40.875 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:40.875 13:47:52 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:28:40.875 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:40.875 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:40.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.875 --rc genhtml_branch_coverage=1 00:28:40.875 --rc genhtml_function_coverage=1 00:28:40.875 --rc genhtml_legend=1 00:28:40.875 --rc geninfo_all_blocks=1 00:28:40.875 --rc geninfo_unexecuted_blocks=1 00:28:40.875 00:28:40.875 ' 00:28:40.875 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:40.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.875 --rc genhtml_branch_coverage=1 00:28:40.875 --rc genhtml_function_coverage=1 00:28:40.875 --rc genhtml_legend=1 00:28:40.875 --rc geninfo_all_blocks=1 00:28:40.875 --rc geninfo_unexecuted_blocks=1 00:28:40.875 00:28:40.875 ' 00:28:40.875 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:40.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.875 --rc genhtml_branch_coverage=1 00:28:40.875 --rc genhtml_function_coverage=1 00:28:40.875 --rc genhtml_legend=1 00:28:40.875 --rc geninfo_all_blocks=1 00:28:40.875 --rc geninfo_unexecuted_blocks=1 00:28:40.875 00:28:40.875 ' 00:28:40.875 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:40.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.875 --rc genhtml_branch_coverage=1 00:28:40.875 --rc genhtml_function_coverage=1 00:28:40.875 --rc genhtml_legend=1 00:28:40.875 --rc geninfo_all_blocks=1 00:28:40.875 --rc geninfo_unexecuted_blocks=1 00:28:40.875 00:28:40.875 ' 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
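The cmp_versions trace above is autotest_common.sh picking lcov flags: the installed lcov version (1.15) is tokenized on '.', '-' and ':' and compared field by field against 2, so lt 1.15 2 succeeds and the pre-2.x LCOV option set is exported. A simplified sketch of that comparison, not the verbatim helper from scripts/common.sh:

# compare dotted version strings field by field; usage: cmp_versions 1.15 '<' 2
cmp_versions() {
    local IFS=.-:                       # split on the same separators as the trace above
    local -a ver1 ver2
    read -ra ver1 <<< "$1"              # "1.15" -> (1 15)
    read -ra ver2 <<< "$3"              # "2"    -> (2)
    local v
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            [[ $2 == '>' ]]; return     # status of the test becomes the return code
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            [[ $2 == '<' ]]; return
        fi
    done
    return 1                            # equal: a strict '<' or '>' does not hold
}
lt() { cmp_versions "$1" '<' "$2"; }    # invoked above as: lt 1.15 2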
00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.qaK3UJtNLq 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:28:40.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
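That notice is waitforlisten at work: restore.sh has forked spdk_tgt into the background, recorded its pid (svcpid=79351 above), and now polls the RPC socket until the target answers, bounded by the 240-second timeout set a few lines earlier. A minimal sketch of the launch-and-wait step, assuming the default /var/tmp/spdk.sock socket rather than the full helper:

# start the target and block until its RPC server responds
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
svcpid=$!
# rpc_get_methods succeeds only once the app thread is serving the socket
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done

Once the loop exits, every subsequent rpc.py call in the test (bdev_nvme_attach_controller, the lvol commands below) can assume a live target.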
00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79351 00:28:40.875 13:47:52 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79351 00:28:40.875 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79351 ']' 00:28:40.875 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.875 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.875 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.875 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.875 13:47:52 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:28:41.134 [2024-11-20 13:47:52.859099] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:28:41.134 [2024-11-20 13:47:52.859449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79351 ] 00:28:41.134 [2024-11-20 13:47:53.044399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.391 [2024-11-20 13:47:53.196711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.325 13:47:54 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.325 13:47:54 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:28:42.325 13:47:54 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:42.325 13:47:54 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:28:42.325 13:47:54 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:42.325 13:47:54 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:28:42.325 13:47:54 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:28:42.325 13:47:54 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:42.586 13:47:54 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:42.586 13:47:54 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:28:42.586 13:47:54 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:42.586 13:47:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:42.586 13:47:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:42.586 13:47:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:42.586 13:47:54 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:42.586 13:47:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:42.848 13:47:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:42.848 { 00:28:42.848 "name": "nvme0n1", 00:28:42.848 "aliases": [ 00:28:42.848 "41fd4bcd-3459-4e56-8c16-b3cf581e896a" 00:28:42.848 ], 00:28:42.848 "product_name": "NVMe disk", 00:28:42.848 "block_size": 4096, 00:28:42.848 "num_blocks": 1310720, 
00:28:42.848 "uuid": "41fd4bcd-3459-4e56-8c16-b3cf581e896a", 00:28:42.848 "numa_id": -1, 00:28:42.848 "assigned_rate_limits": { 00:28:42.848 "rw_ios_per_sec": 0, 00:28:42.848 "rw_mbytes_per_sec": 0, 00:28:42.848 "r_mbytes_per_sec": 0, 00:28:42.848 "w_mbytes_per_sec": 0 00:28:42.848 }, 00:28:42.848 "claimed": true, 00:28:42.848 "claim_type": "read_many_write_one", 00:28:42.848 "zoned": false, 00:28:42.848 "supported_io_types": { 00:28:42.848 "read": true, 00:28:42.848 "write": true, 00:28:42.848 "unmap": true, 00:28:42.848 "flush": true, 00:28:42.848 "reset": true, 00:28:42.848 "nvme_admin": true, 00:28:42.848 "nvme_io": true, 00:28:42.848 "nvme_io_md": false, 00:28:42.848 "write_zeroes": true, 00:28:42.848 "zcopy": false, 00:28:42.848 "get_zone_info": false, 00:28:42.848 "zone_management": false, 00:28:42.848 "zone_append": false, 00:28:42.848 "compare": true, 00:28:42.848 "compare_and_write": false, 00:28:42.848 "abort": true, 00:28:42.848 "seek_hole": false, 00:28:42.848 "seek_data": false, 00:28:42.848 "copy": true, 00:28:42.848 "nvme_iov_md": false 00:28:42.848 }, 00:28:42.848 "driver_specific": { 00:28:42.848 "nvme": [ 00:28:42.848 { 00:28:42.848 "pci_address": "0000:00:11.0", 00:28:42.848 "trid": { 00:28:42.848 "trtype": "PCIe", 00:28:42.848 "traddr": "0000:00:11.0" 00:28:42.848 }, 00:28:42.848 "ctrlr_data": { 00:28:42.848 "cntlid": 0, 00:28:42.848 "vendor_id": "0x1b36", 00:28:42.848 "model_number": "QEMU NVMe Ctrl", 00:28:42.848 "serial_number": "12341", 00:28:42.848 "firmware_revision": "8.0.0", 00:28:42.848 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:42.848 "oacs": { 00:28:42.848 "security": 0, 00:28:42.848 "format": 1, 00:28:42.848 "firmware": 0, 00:28:42.848 "ns_manage": 1 00:28:42.848 }, 00:28:42.848 "multi_ctrlr": false, 00:28:42.848 "ana_reporting": false 00:28:42.848 }, 00:28:42.848 "vs": { 00:28:42.848 "nvme_version": "1.4" 00:28:42.848 }, 00:28:42.848 "ns_data": { 00:28:42.848 "id": 1, 00:28:42.848 "can_share": false 00:28:42.848 } 00:28:42.848 } 00:28:42.848 ], 00:28:42.848 "mp_policy": "active_passive" 00:28:42.848 } 00:28:42.848 } 00:28:42.848 ]' 00:28:42.848 13:47:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:42.848 13:47:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:42.848 13:47:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:43.118 13:47:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:43.118 13:47:54 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:43.118 13:47:54 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:28:43.118 13:47:54 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:28:43.118 13:47:54 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:43.119 13:47:54 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:28:43.119 13:47:54 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:43.119 13:47:54 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:43.379 13:47:55 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=91fb88d4-9bc4-48f0-b56f-94039f7e469e 00:28:43.379 13:47:55 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:28:43.379 13:47:55 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 91fb88d4-9bc4-48f0-b56f-94039f7e469e 00:28:43.638 13:47:55 ftl.ftl_restore -- ftl/common.sh@68 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:43.638 13:47:55 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=bafcdb6c-4d27-4309-b9ee-ca9fca1513db 00:28:43.638 13:47:55 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bafcdb6c-4d27-4309-b9ee-ca9fca1513db 00:28:43.896 13:47:55 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=504505b7-91ac-4f1b-8932-5381f78e53fe 00:28:43.896 13:47:55 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:43.896 13:47:55 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 504505b7-91ac-4f1b-8932-5381f78e53fe 00:28:43.896 13:47:55 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:28:43.896 13:47:55 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:43.896 13:47:55 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=504505b7-91ac-4f1b-8932-5381f78e53fe 00:28:43.896 13:47:55 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:28:43.896 13:47:55 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 504505b7-91ac-4f1b-8932-5381f78e53fe 00:28:43.896 13:47:55 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=504505b7-91ac-4f1b-8932-5381f78e53fe 00:28:43.896 13:47:55 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:43.896 13:47:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:43.896 13:47:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:43.896 13:47:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 504505b7-91ac-4f1b-8932-5381f78e53fe 00:28:44.154 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:44.154 { 00:28:44.154 "name": "504505b7-91ac-4f1b-8932-5381f78e53fe", 00:28:44.154 "aliases": [ 00:28:44.154 "lvs/nvme0n1p0" 00:28:44.154 ], 00:28:44.154 "product_name": "Logical Volume", 00:28:44.154 "block_size": 4096, 00:28:44.154 "num_blocks": 26476544, 00:28:44.154 "uuid": "504505b7-91ac-4f1b-8932-5381f78e53fe", 00:28:44.154 "assigned_rate_limits": { 00:28:44.154 "rw_ios_per_sec": 0, 00:28:44.154 "rw_mbytes_per_sec": 0, 00:28:44.154 "r_mbytes_per_sec": 0, 00:28:44.154 "w_mbytes_per_sec": 0 00:28:44.154 }, 00:28:44.154 "claimed": false, 00:28:44.154 "zoned": false, 00:28:44.154 "supported_io_types": { 00:28:44.154 "read": true, 00:28:44.154 "write": true, 00:28:44.154 "unmap": true, 00:28:44.154 "flush": false, 00:28:44.154 "reset": true, 00:28:44.154 "nvme_admin": false, 00:28:44.154 "nvme_io": false, 00:28:44.154 "nvme_io_md": false, 00:28:44.154 "write_zeroes": true, 00:28:44.154 "zcopy": false, 00:28:44.154 "get_zone_info": false, 00:28:44.154 "zone_management": false, 00:28:44.154 "zone_append": false, 00:28:44.154 "compare": false, 00:28:44.154 "compare_and_write": false, 00:28:44.154 "abort": false, 00:28:44.154 "seek_hole": true, 00:28:44.154 "seek_data": true, 00:28:44.154 "copy": false, 00:28:44.154 "nvme_iov_md": false 00:28:44.154 }, 00:28:44.154 "driver_specific": { 00:28:44.154 "lvol": { 00:28:44.154 "lvol_store_uuid": "bafcdb6c-4d27-4309-b9ee-ca9fca1513db", 00:28:44.154 "base_bdev": "nvme0n1", 00:28:44.154 "thin_provision": true, 00:28:44.154 "num_allocated_clusters": 0, 00:28:44.154 "snapshot": false, 00:28:44.154 "clone": false, 00:28:44.154 "esnap_clone": false 00:28:44.154 } 00:28:44.154 } 00:28:44.154 } 00:28:44.154 ]' 00:28:44.154 13:47:56 
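The bdev JSON dumped above is exactly what get_bdev_size consumes: it pulls block_size and num_blocks out with jq and converts the product to MiB, which is where the repeated bs=4096 / nb=26476544 / bdev_size=103424 triples in this log come from (4096 B x 26476544 blocks = 103424 MiB). A condensed sketch of the helper as reconstructed from the trace:

# bdev size in MiB, derived from a bdev_get_bdevs dump
get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 for the lvol above
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544 for the lvol above
    echo $(( bs * nb / 1024 / 1024 ))              # -> 103424
}

The same arithmetic yields the 5120 MiB figure reported for the 1310720-block nvme0n1 namespace earlier, and a 5171 MiB slice is then split off nvc0n1 below to serve as the write-buffer cache.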
ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:44.154 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:44.154 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:44.412 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:44.413 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:44.413 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:28:44.413 13:47:56 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:28:44.413 13:47:56 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:28:44.413 13:47:56 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:44.670 13:47:56 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:44.670 13:47:56 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:44.670 13:47:56 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 504505b7-91ac-4f1b-8932-5381f78e53fe 00:28:44.670 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=504505b7-91ac-4f1b-8932-5381f78e53fe 00:28:44.670 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:44.670 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:44.670 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:44.670 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 504505b7-91ac-4f1b-8932-5381f78e53fe 00:28:44.928 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:44.928 { 00:28:44.928 "name": "504505b7-91ac-4f1b-8932-5381f78e53fe", 00:28:44.928 "aliases": [ 00:28:44.928 "lvs/nvme0n1p0" 00:28:44.928 ], 00:28:44.928 "product_name": "Logical Volume", 00:28:44.928 "block_size": 4096, 00:28:44.928 "num_blocks": 26476544, 00:28:44.928 "uuid": "504505b7-91ac-4f1b-8932-5381f78e53fe", 00:28:44.928 "assigned_rate_limits": { 00:28:44.928 "rw_ios_per_sec": 0, 00:28:44.928 "rw_mbytes_per_sec": 0, 00:28:44.928 "r_mbytes_per_sec": 0, 00:28:44.928 "w_mbytes_per_sec": 0 00:28:44.928 }, 00:28:44.928 "claimed": false, 00:28:44.928 "zoned": false, 00:28:44.928 "supported_io_types": { 00:28:44.928 "read": true, 00:28:44.928 "write": true, 00:28:44.928 "unmap": true, 00:28:44.928 "flush": false, 00:28:44.928 "reset": true, 00:28:44.928 "nvme_admin": false, 00:28:44.928 "nvme_io": false, 00:28:44.928 "nvme_io_md": false, 00:28:44.928 "write_zeroes": true, 00:28:44.928 "zcopy": false, 00:28:44.928 "get_zone_info": false, 00:28:44.928 "zone_management": false, 00:28:44.928 "zone_append": false, 00:28:44.928 "compare": false, 00:28:44.928 "compare_and_write": false, 00:28:44.928 "abort": false, 00:28:44.928 "seek_hole": true, 00:28:44.928 "seek_data": true, 00:28:44.928 "copy": false, 00:28:44.928 "nvme_iov_md": false 00:28:44.928 }, 00:28:44.928 "driver_specific": { 00:28:44.928 "lvol": { 00:28:44.928 "lvol_store_uuid": "bafcdb6c-4d27-4309-b9ee-ca9fca1513db", 00:28:44.928 "base_bdev": "nvme0n1", 00:28:44.928 "thin_provision": true, 00:28:44.928 "num_allocated_clusters": 0, 00:28:44.928 "snapshot": false, 00:28:44.928 "clone": false, 00:28:44.928 "esnap_clone": false 00:28:44.928 } 00:28:44.928 } 00:28:44.928 } 00:28:44.928 ]' 00:28:44.928 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] 
.block_size' 00:28:44.928 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:44.928 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:44.928 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:44.928 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:44.928 13:47:56 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:28:44.928 13:47:56 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:28:44.928 13:47:56 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:45.186 13:47:57 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:45.186 13:47:57 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 504505b7-91ac-4f1b-8932-5381f78e53fe 00:28:45.186 13:47:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=504505b7-91ac-4f1b-8932-5381f78e53fe 00:28:45.186 13:47:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:45.186 13:47:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:45.186 13:47:57 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:45.186 13:47:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 504505b7-91ac-4f1b-8932-5381f78e53fe 00:28:45.444 13:47:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:45.444 { 00:28:45.444 "name": "504505b7-91ac-4f1b-8932-5381f78e53fe", 00:28:45.444 "aliases": [ 00:28:45.444 "lvs/nvme0n1p0" 00:28:45.444 ], 00:28:45.444 "product_name": "Logical Volume", 00:28:45.444 "block_size": 4096, 00:28:45.445 "num_blocks": 26476544, 00:28:45.445 "uuid": "504505b7-91ac-4f1b-8932-5381f78e53fe", 00:28:45.445 "assigned_rate_limits": { 00:28:45.445 "rw_ios_per_sec": 0, 00:28:45.445 "rw_mbytes_per_sec": 0, 00:28:45.445 "r_mbytes_per_sec": 0, 00:28:45.445 "w_mbytes_per_sec": 0 00:28:45.445 }, 00:28:45.445 "claimed": false, 00:28:45.445 "zoned": false, 00:28:45.445 "supported_io_types": { 00:28:45.445 "read": true, 00:28:45.445 "write": true, 00:28:45.445 "unmap": true, 00:28:45.445 "flush": false, 00:28:45.445 "reset": true, 00:28:45.445 "nvme_admin": false, 00:28:45.445 "nvme_io": false, 00:28:45.445 "nvme_io_md": false, 00:28:45.445 "write_zeroes": true, 00:28:45.445 "zcopy": false, 00:28:45.445 "get_zone_info": false, 00:28:45.445 "zone_management": false, 00:28:45.445 "zone_append": false, 00:28:45.445 "compare": false, 00:28:45.445 "compare_and_write": false, 00:28:45.445 "abort": false, 00:28:45.445 "seek_hole": true, 00:28:45.445 "seek_data": true, 00:28:45.445 "copy": false, 00:28:45.445 "nvme_iov_md": false 00:28:45.445 }, 00:28:45.445 "driver_specific": { 00:28:45.445 "lvol": { 00:28:45.445 "lvol_store_uuid": "bafcdb6c-4d27-4309-b9ee-ca9fca1513db", 00:28:45.445 "base_bdev": "nvme0n1", 00:28:45.445 "thin_provision": true, 00:28:45.445 "num_allocated_clusters": 0, 00:28:45.445 "snapshot": false, 00:28:45.445 "clone": false, 00:28:45.445 "esnap_clone": false 00:28:45.445 } 00:28:45.445 } 00:28:45.445 } 00:28:45.445 ]' 00:28:45.445 13:47:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:45.445 13:47:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:45.445 13:47:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:45.703 13:47:57 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:28:45.703 13:47:57 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:45.703 13:47:57 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:28:45.704 13:47:57 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:45.704 13:47:57 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 504505b7-91ac-4f1b-8932-5381f78e53fe --l2p_dram_limit 10' 00:28:45.704 13:47:57 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:45.704 13:47:57 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:45.704 13:47:57 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:45.704 13:47:57 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:28:45.704 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:28:45.704 13:47:57 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 504505b7-91ac-4f1b-8932-5381f78e53fe --l2p_dram_limit 10 -c nvc0n1p0 00:28:45.704 [2024-11-20 13:47:57.658989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.704 [2024-11-20 13:47:57.659060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:45.704 [2024-11-20 13:47:57.659084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:45.704 [2024-11-20 13:47:57.659097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.704 [2024-11-20 13:47:57.659187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.704 [2024-11-20 13:47:57.659202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:45.704 [2024-11-20 13:47:57.659218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:28:45.704 [2024-11-20 13:47:57.659230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.704 [2024-11-20 13:47:57.659259] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:45.963 [2024-11-20 13:47:57.660355] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:45.963 [2024-11-20 13:47:57.660393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.963 [2024-11-20 13:47:57.660406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:45.963 [2024-11-20 13:47:57.660420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.137 ms 00:28:45.963 [2024-11-20 13:47:57.660432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.963 [2024-11-20 13:47:57.660576] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e41e1e69-a5d8-48be-8784-65345996624c 00:28:45.963 [2024-11-20 13:47:57.662075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.963 [2024-11-20 13:47:57.662115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:45.963 [2024-11-20 13:47:57.662130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:45.963 [2024-11-20 13:47:57.662146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.963 [2024-11-20 13:47:57.669987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.963 [2024-11-20 
13:47:57.670045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:45.963 [2024-11-20 13:47:57.670060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.788 ms 00:28:45.963 [2024-11-20 13:47:57.670074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.963 [2024-11-20 13:47:57.670242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.963 [2024-11-20 13:47:57.670261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:45.963 [2024-11-20 13:47:57.670274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:28:45.963 [2024-11-20 13:47:57.670293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.963 [2024-11-20 13:47:57.670412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.963 [2024-11-20 13:47:57.670434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:45.963 [2024-11-20 13:47:57.670447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:45.963 [2024-11-20 13:47:57.670464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.963 [2024-11-20 13:47:57.670494] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:45.963 [2024-11-20 13:47:57.675796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.963 [2024-11-20 13:47:57.675844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:45.963 [2024-11-20 13:47:57.675861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.314 ms 00:28:45.963 [2024-11-20 13:47:57.675872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.963 [2024-11-20 13:47:57.675925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.963 [2024-11-20 13:47:57.675937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:45.963 [2024-11-20 13:47:57.675951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:45.963 [2024-11-20 13:47:57.675962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.963 [2024-11-20 13:47:57.676015] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:45.963 [2024-11-20 13:47:57.676161] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:45.963 [2024-11-20 13:47:57.676185] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:45.963 [2024-11-20 13:47:57.676201] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:45.963 [2024-11-20 13:47:57.676234] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:45.963 [2024-11-20 13:47:57.676247] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:45.963 [2024-11-20 13:47:57.676261] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:45.963 [2024-11-20 13:47:57.676273] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:45.963 [2024-11-20 13:47:57.676288] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:45.963 [2024-11-20 13:47:57.676298] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:45.963 [2024-11-20 13:47:57.676311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.963 [2024-11-20 13:47:57.676322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:45.963 [2024-11-20 13:47:57.676336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:28:45.963 [2024-11-20 13:47:57.676362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.963 [2024-11-20 13:47:57.676438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.963 [2024-11-20 13:47:57.676450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:45.963 [2024-11-20 13:47:57.676463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:45.963 [2024-11-20 13:47:57.676473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.963 [2024-11-20 13:47:57.676575] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:45.963 [2024-11-20 13:47:57.676589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:45.963 [2024-11-20 13:47:57.676602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:45.963 [2024-11-20 13:47:57.676632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.963 [2024-11-20 13:47:57.676646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:45.963 [2024-11-20 13:47:57.676655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:45.963 [2024-11-20 13:47:57.676667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:45.963 [2024-11-20 13:47:57.676677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:45.963 [2024-11-20 13:47:57.676691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:45.963 [2024-11-20 13:47:57.676700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:45.963 [2024-11-20 13:47:57.676712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:45.963 [2024-11-20 13:47:57.676724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:45.963 [2024-11-20 13:47:57.676736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:45.963 [2024-11-20 13:47:57.676745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:45.963 [2024-11-20 13:47:57.676757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:45.963 [2024-11-20 13:47:57.676766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.963 [2024-11-20 13:47:57.676782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:45.963 [2024-11-20 13:47:57.676791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:45.963 [2024-11-20 13:47:57.676820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.963 [2024-11-20 13:47:57.676830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:45.963 [2024-11-20 13:47:57.676843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:45.963 [2024-11-20 13:47:57.676852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:45.963 [2024-11-20 13:47:57.676865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:45.963 
[2024-11-20 13:47:57.676875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:45.963 [2024-11-20 13:47:57.676890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:45.963 [2024-11-20 13:47:57.676900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:45.963 [2024-11-20 13:47:57.676912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:45.963 [2024-11-20 13:47:57.676922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:45.963 [2024-11-20 13:47:57.676934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:45.963 [2024-11-20 13:47:57.676944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:45.963 [2024-11-20 13:47:57.676956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:45.963 [2024-11-20 13:47:57.676967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:45.963 [2024-11-20 13:47:57.676982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:45.963 [2024-11-20 13:47:57.676993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:45.963 [2024-11-20 13:47:57.677006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:45.963 [2024-11-20 13:47:57.677015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:45.963 [2024-11-20 13:47:57.677028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:45.964 [2024-11-20 13:47:57.677038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:45.964 [2024-11-20 13:47:57.677050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:45.964 [2024-11-20 13:47:57.677060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.964 [2024-11-20 13:47:57.677072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:45.964 [2024-11-20 13:47:57.677082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:45.964 [2024-11-20 13:47:57.677094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.964 [2024-11-20 13:47:57.677103] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:45.964 [2024-11-20 13:47:57.677118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:45.964 [2024-11-20 13:47:57.677129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:45.964 [2024-11-20 13:47:57.677142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:45.964 [2024-11-20 13:47:57.677154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:45.964 [2024-11-20 13:47:57.677171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:45.964 [2024-11-20 13:47:57.677180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:45.964 [2024-11-20 13:47:57.677193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:45.964 [2024-11-20 13:47:57.677203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:45.964 [2024-11-20 13:47:57.677216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:45.964 [2024-11-20 13:47:57.677231] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:45.964 [2024-11-20 
13:47:57.677248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:45.964 [2024-11-20 13:47:57.677263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:45.964 [2024-11-20 13:47:57.677278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:45.964 [2024-11-20 13:47:57.677290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:45.964 [2024-11-20 13:47:57.677303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:45.964 [2024-11-20 13:47:57.677314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:45.964 [2024-11-20 13:47:57.677328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:45.964 [2024-11-20 13:47:57.677339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:45.964 [2024-11-20 13:47:57.677352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:45.964 [2024-11-20 13:47:57.677364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:45.964 [2024-11-20 13:47:57.677381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:45.964 [2024-11-20 13:47:57.677391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:45.964 [2024-11-20 13:47:57.677407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:45.964 [2024-11-20 13:47:57.677418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:45.964 [2024-11-20 13:47:57.677431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:45.964 [2024-11-20 13:47:57.677442] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:45.964 [2024-11-20 13:47:57.677457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:45.964 [2024-11-20 13:47:57.677469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:45.964 [2024-11-20 13:47:57.677482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:45.964 [2024-11-20 13:47:57.677494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:45.964 [2024-11-20 13:47:57.677508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:45.964 [2024-11-20 13:47:57.677520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.964 [2024-11-20 13:47:57.677534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:45.964 [2024-11-20 13:47:57.677545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:28:45.964 [2024-11-20 13:47:57.677559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.964 [2024-11-20 13:47:57.677604] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:45.964 [2024-11-20 13:47:57.677633] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:49.323 [2024-11-20 13:48:00.563295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:00.563395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:49.323 [2024-11-20 13:48:00.563415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2890.372 ms 00:28:49.323 [2024-11-20 13:48:00.563429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:00.603348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:00.603410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:49.323 [2024-11-20 13:48:00.603427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.670 ms 00:28:49.323 [2024-11-20 13:48:00.603456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:00.603659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:00.603681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:49.323 [2024-11-20 13:48:00.603694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:28:49.323 [2024-11-20 13:48:00.603719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:00.651232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:00.651295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:49.323 [2024-11-20 13:48:00.651314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.518 ms 00:28:49.323 [2024-11-20 13:48:00.651329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:00.651392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:00.651413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:49.323 [2024-11-20 13:48:00.651426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:49.323 [2024-11-20 13:48:00.651440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:00.651978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:00.652008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:49.323 [2024-11-20 13:48:00.652022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:28:49.323 [2024-11-20 13:48:00.652035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 
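Each FTL management step is traced above as an Action / name / duration / status quadruple, and the per-step durations roughly account for the total that the "Management process finished" message reports further down; startup here is dominated by the ~2890 ms NV cache scrub. A minimal bash sketch for tallying the step durations from a saved console capture — the file name build.log is a hypothetical placeholder, not part of this run:

    # Extract every per-step "duration: X ms" fragment and sum field 2.
    grep -oE 'duration: [0-9.]+ ms' build.log \
        | awk '{ total += $2 } END { printf "steps: %d, total: %.3f ms\n", NR, total }'

The grep isolates one duration fragment per trace step; note the sum will exceed any single process's reported total if the capture contains both the startup and shutdown sequences.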
[2024-11-20 13:48:00.652149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:00.652166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:49.323 [2024-11-20 13:48:00.652182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:28:49.323 [2024-11-20 13:48:00.652198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:00.673208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:00.673277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:49.323 [2024-11-20 13:48:00.673296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.018 ms 00:28:49.323 [2024-11-20 13:48:00.673311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:00.697089] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:49.323 [2024-11-20 13:48:00.700413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:00.700449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:49.323 [2024-11-20 13:48:00.700606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.000 ms 00:28:49.323 [2024-11-20 13:48:00.700619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:00.788606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:00.788683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:49.323 [2024-11-20 13:48:00.788704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.067 ms 00:28:49.323 [2024-11-20 13:48:00.788716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:00.788924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:00.788943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:49.323 [2024-11-20 13:48:00.788962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:28:49.323 [2024-11-20 13:48:00.788973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:00.826348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:00.826402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:49.323 [2024-11-20 13:48:00.826424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.368 ms 00:28:49.323 [2024-11-20 13:48:00.826435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:00.863453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:00.863523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:49.323 [2024-11-20 13:48:00.863546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.016 ms 00:28:49.323 [2024-11-20 13:48:00.863557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:00.864299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:00.864324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:49.323 
[2024-11-20 13:48:00.864340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:28:49.323 [2024-11-20 13:48:00.864355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:00.971378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:00.971447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:49.323 [2024-11-20 13:48:00.971476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.124 ms 00:28:49.323 [2024-11-20 13:48:00.971488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:01.013510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:01.013577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:49.323 [2024-11-20 13:48:01.013606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.938 ms 00:28:49.323 [2024-11-20 13:48:01.013619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:01.054590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:01.054664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:49.323 [2024-11-20 13:48:01.054688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.969 ms 00:28:49.323 [2024-11-20 13:48:01.054700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:01.095106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:01.095159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:49.323 [2024-11-20 13:48:01.095181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.397 ms 00:28:49.323 [2024-11-20 13:48:01.095193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:01.095255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:01.095270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:49.323 [2024-11-20 13:48:01.095291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:49.323 [2024-11-20 13:48:01.095302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:01.095434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.323 [2024-11-20 13:48:01.095455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:49.323 [2024-11-20 13:48:01.095474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:49.323 [2024-11-20 13:48:01.095485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.323 [2024-11-20 13:48:01.096716] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3442.819 ms, result 0 00:28:49.323 { 00:28:49.323 "name": "ftl0", 00:28:49.323 "uuid": "e41e1e69-a5d8-48be-8784-65345996624c" 00:28:49.323 } 00:28:49.323 13:48:01 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:49.323 13:48:01 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:49.582 13:48:01 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:28:49.582 13:48:01 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:49.841 [2024-11-20 13:48:01.579225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.841 [2024-11-20 13:48:01.579293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:49.841 [2024-11-20 13:48:01.579312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:49.841 [2024-11-20 13:48:01.579336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.841 [2024-11-20 13:48:01.579368] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:49.841 [2024-11-20 13:48:01.583742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.841 [2024-11-20 13:48:01.583785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:49.841 [2024-11-20 13:48:01.583803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.350 ms 00:28:49.841 [2024-11-20 13:48:01.583815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.841 [2024-11-20 13:48:01.584084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.841 [2024-11-20 13:48:01.584109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:49.841 [2024-11-20 13:48:01.584124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:28:49.841 [2024-11-20 13:48:01.584135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.841 [2024-11-20 13:48:01.586675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.841 [2024-11-20 13:48:01.586698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:49.841 [2024-11-20 13:48:01.586713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.522 ms 00:28:49.841 [2024-11-20 13:48:01.586725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.841 [2024-11-20 13:48:01.591760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.841 [2024-11-20 13:48:01.591803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:49.841 [2024-11-20 13:48:01.591822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.016 ms 00:28:49.841 [2024-11-20 13:48:01.591833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.841 [2024-11-20 13:48:01.630906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.841 [2024-11-20 13:48:01.630976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:49.841 [2024-11-20 13:48:01.630997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.038 ms 00:28:49.841 [2024-11-20 13:48:01.631008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.841 [2024-11-20 13:48:01.653583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.841 [2024-11-20 13:48:01.653663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:49.841 [2024-11-20 13:48:01.653686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.513 ms 00:28:49.841 [2024-11-20 13:48:01.653697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.841 [2024-11-20 13:48:01.653924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.841 [2024-11-20 13:48:01.653940] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:49.841 [2024-11-20 13:48:01.653956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:28:49.841 [2024-11-20 13:48:01.653968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.841 [2024-11-20 13:48:01.693295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.841 [2024-11-20 13:48:01.693366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:49.841 [2024-11-20 13:48:01.693387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.352 ms 00:28:49.841 [2024-11-20 13:48:01.693398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.841 [2024-11-20 13:48:01.731934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.841 [2024-11-20 13:48:01.732004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:49.841 [2024-11-20 13:48:01.732044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.502 ms 00:28:49.841 [2024-11-20 13:48:01.732056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.841 [2024-11-20 13:48:01.771778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.841 [2024-11-20 13:48:01.771865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:49.841 [2024-11-20 13:48:01.771887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.690 ms 00:28:49.841 [2024-11-20 13:48:01.771899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.103 [2024-11-20 13:48:01.809402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.103 [2024-11-20 13:48:01.809467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:50.103 [2024-11-20 13:48:01.809489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.379 ms 00:28:50.103 [2024-11-20 13:48:01.809500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.103 [2024-11-20 13:48:01.809572] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:50.103 [2024-11-20 13:48:01.809594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809728] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.809994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.810006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.810017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.810030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 
[2024-11-20 13:48:01.810041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.810055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.810065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.810079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.810090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.810105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.810118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.810132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.810142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:50.103 [2024-11-20 13:48:01.810156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:28:50.104 [2024-11-20 13:48:01.810383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:50.104 [2024-11-20 13:48:01.810933] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:50.104 [2024-11-20 13:48:01.810950] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e41e1e69-a5d8-48be-8784-65345996624c 00:28:50.104 [2024-11-20 13:48:01.810962] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:50.104 [2024-11-20 13:48:01.810978] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:50.104 [2024-11-20 13:48:01.810989] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:50.104 [2024-11-20 13:48:01.811007] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:50.104 [2024-11-20 13:48:01.811017] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:50.104 [2024-11-20 13:48:01.811031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:50.104 [2024-11-20 13:48:01.811042] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:50.104 [2024-11-20 13:48:01.811053] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:50.104 [2024-11-20 13:48:01.811062] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:28:50.104 [2024-11-20 13:48:01.811075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.104 [2024-11-20 13:48:01.811086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:50.104 [2024-11-20 13:48:01.811100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms 00:28:50.104 [2024-11-20 13:48:01.811109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.104 [2024-11-20 13:48:01.831380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.104 [2024-11-20 13:48:01.831437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:50.104 [2024-11-20 13:48:01.831456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.219 ms 00:28:50.104 [2024-11-20 13:48:01.831467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.104 [2024-11-20 13:48:01.832031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.104 [2024-11-20 13:48:01.832047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:50.104 [2024-11-20 13:48:01.832066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:28:50.104 [2024-11-20 13:48:01.832077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.104 [2024-11-20 13:48:01.897321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.104 [2024-11-20 13:48:01.897391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:50.104 [2024-11-20 13:48:01.897410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.104 [2024-11-20 13:48:01.897421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.104 [2024-11-20 13:48:01.897514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.104 [2024-11-20 13:48:01.897526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:50.104 [2024-11-20 13:48:01.897543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.104 [2024-11-20 13:48:01.897554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.104 [2024-11-20 13:48:01.897705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.104 [2024-11-20 13:48:01.897723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:50.104 [2024-11-20 13:48:01.897737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.104 [2024-11-20 13:48:01.897748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.104 [2024-11-20 13:48:01.897777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.104 [2024-11-20 13:48:01.897788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:50.104 [2024-11-20 13:48:01.897801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.104 [2024-11-20 13:48:01.897812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.104 [2024-11-20 13:48:02.026418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.104 [2024-11-20 13:48:02.026496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:50.105 [2024-11-20 13:48:02.026517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:28:50.105 [2024-11-20 13:48:02.026529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.364 [2024-11-20 13:48:02.133999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.364 [2024-11-20 13:48:02.134078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:50.364 [2024-11-20 13:48:02.134100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.364 [2024-11-20 13:48:02.134118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.364 [2024-11-20 13:48:02.134315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.364 [2024-11-20 13:48:02.134330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:50.364 [2024-11-20 13:48:02.134345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.364 [2024-11-20 13:48:02.134357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.364 [2024-11-20 13:48:02.134439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.364 [2024-11-20 13:48:02.134455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:50.364 [2024-11-20 13:48:02.134473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.364 [2024-11-20 13:48:02.134486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.364 [2024-11-20 13:48:02.134762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.364 [2024-11-20 13:48:02.134799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:50.364 [2024-11-20 13:48:02.134819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.364 [2024-11-20 13:48:02.134833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.364 [2024-11-20 13:48:02.134919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.364 [2024-11-20 13:48:02.134933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:50.365 [2024-11-20 13:48:02.134955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.365 [2024-11-20 13:48:02.134980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.365 [2024-11-20 13:48:02.135049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.365 [2024-11-20 13:48:02.135070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:50.365 [2024-11-20 13:48:02.135086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.365 [2024-11-20 13:48:02.135111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.365 [2024-11-20 13:48:02.135181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.365 [2024-11-20 13:48:02.135214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:50.365 [2024-11-20 13:48:02.135234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.365 [2024-11-20 13:48:02.135253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.365 [2024-11-20 13:48:02.135450] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 557.063 ms, result 0 00:28:50.365 true 00:28:50.365 13:48:02 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79351 
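The killprocess call traced here expands, in the xtrace lines that follow, into a common guard-then-kill idiom: check that the pid argument is non-empty, probe liveness with kill -0, resolve the command name via ps so that a sudo wrapper is never signalled directly, then kill and wait. A condensed sketch of that shape — an illustration of the idiom, not the autotest_common.sh source itself:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                  # refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 1     # is the process still alive?
        # Never signal a sudo wrapper directly; bail out in this sketch.
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                    # reap it when it is a child of this shell
    }

The dd invocation that follows the kill also checks out arithmetically: count=256K blocks of bs=4K is 262144 x 4096 B = 1073741824 B (1 GiB), matching the reported byte count, and 1073741824 B / 4.3854 s is roughly 244.8 MB/s, which dd rounds to the 245 MB/s shown.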
00:28:50.365 13:48:02 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79351 ']' 00:28:50.365 13:48:02 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79351 00:28:50.365 13:48:02 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:28:50.365 13:48:02 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:50.365 13:48:02 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79351 00:28:50.365 13:48:02 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:50.365 13:48:02 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:50.365 killing process with pid 79351 00:28:50.365 13:48:02 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79351' 00:28:50.365 13:48:02 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79351 00:28:50.365 13:48:02 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79351 00:28:55.641 13:48:07 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:59.834 262144+0 records in 00:28:59.834 262144+0 records out 00:28:59.834 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.3854 s, 245 MB/s 00:28:59.834 13:48:11 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:01.740 13:48:13 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:02.000 [2024-11-20 13:48:13.724327] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:29:02.000 [2024-11-20 13:48:13.724479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79595 ] 00:29:02.000 [2024-11-20 13:48:13.914070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.259 [2024-11-20 13:48:14.031213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.518 [2024-11-20 13:48:14.465128] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:02.518 [2024-11-20 13:48:14.465214] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:02.777 [2024-11-20 13:48:14.634746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.777 [2024-11-20 13:48:14.634825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:02.777 [2024-11-20 13:48:14.634856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:02.777 [2024-11-20 13:48:14.634875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.777 [2024-11-20 13:48:14.634948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.777 [2024-11-20 13:48:14.634962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:02.777 [2024-11-20 13:48:14.634978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:02.777 [2024-11-20 13:48:14.634990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.777 [2024-11-20 13:48:14.635013] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:29:02.777 [2024-11-20 13:48:14.636134] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:02.777 [2024-11-20 13:48:14.636176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.777 [2024-11-20 13:48:14.636188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:02.777 [2024-11-20 13:48:14.636200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:29:02.777 [2024-11-20 13:48:14.636211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.777 [2024-11-20 13:48:14.637762] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:02.777 [2024-11-20 13:48:14.658680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.777 [2024-11-20 13:48:14.658730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:02.777 [2024-11-20 13:48:14.658747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.952 ms 00:29:02.777 [2024-11-20 13:48:14.658767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.777 [2024-11-20 13:48:14.658867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.777 [2024-11-20 13:48:14.658882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:02.777 [2024-11-20 13:48:14.658894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:02.777 [2024-11-20 13:48:14.658905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.777 [2024-11-20 13:48:14.666053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.777 [2024-11-20 13:48:14.666096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:02.777 [2024-11-20 13:48:14.666111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.072 ms 00:29:02.777 [2024-11-20 13:48:14.666135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.777 [2024-11-20 13:48:14.666259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.777 [2024-11-20 13:48:14.666275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:02.777 [2024-11-20 13:48:14.666287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:29:02.777 [2024-11-20 13:48:14.666299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.777 [2024-11-20 13:48:14.666346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.777 [2024-11-20 13:48:14.666359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:02.777 [2024-11-20 13:48:14.666371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:02.777 [2024-11-20 13:48:14.666381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.777 [2024-11-20 13:48:14.666419] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:02.777 [2024-11-20 13:48:14.671710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.777 [2024-11-20 13:48:14.671751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:02.777 [2024-11-20 13:48:14.671765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.314 ms 00:29:02.777 [2024-11-20 13:48:14.671784] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.777 [2024-11-20 13:48:14.671819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.777 [2024-11-20 13:48:14.671832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:02.778 [2024-11-20 13:48:14.671844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:02.778 [2024-11-20 13:48:14.671855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.778 [2024-11-20 13:48:14.671914] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:02.778 [2024-11-20 13:48:14.671944] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:02.778 [2024-11-20 13:48:14.671984] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:02.778 [2024-11-20 13:48:14.672010] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:02.778 [2024-11-20 13:48:14.672115] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:02.778 [2024-11-20 13:48:14.672131] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:02.778 [2024-11-20 13:48:14.672145] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:02.778 [2024-11-20 13:48:14.672160] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:02.778 [2024-11-20 13:48:14.672174] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:02.778 [2024-11-20 13:48:14.672186] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:02.778 [2024-11-20 13:48:14.672199] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:02.778 [2024-11-20 13:48:14.672210] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:02.778 [2024-11-20 13:48:14.672229] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:02.778 [2024-11-20 13:48:14.672241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.778 [2024-11-20 13:48:14.672252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:02.778 [2024-11-20 13:48:14.672264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:29:02.778 [2024-11-20 13:48:14.672275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.778 [2024-11-20 13:48:14.672357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.778 [2024-11-20 13:48:14.672372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:02.778 [2024-11-20 13:48:14.672383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:02.778 [2024-11-20 13:48:14.672394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.778 [2024-11-20 13:48:14.672507] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:02.778 [2024-11-20 13:48:14.672524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:02.778 [2024-11-20 13:48:14.672536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:29:02.778 [2024-11-20 13:48:14.672547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.778 [2024-11-20 13:48:14.672558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:02.778 [2024-11-20 13:48:14.672569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:02.778 [2024-11-20 13:48:14.672579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:02.778 [2024-11-20 13:48:14.672589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:02.778 [2024-11-20 13:48:14.672614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:02.778 [2024-11-20 13:48:14.672625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:02.778 [2024-11-20 13:48:14.672641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:02.778 [2024-11-20 13:48:14.672652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:02.778 [2024-11-20 13:48:14.672666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:02.778 [2024-11-20 13:48:14.672677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:02.778 [2024-11-20 13:48:14.672688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:02.778 [2024-11-20 13:48:14.672713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.778 [2024-11-20 13:48:14.672723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:02.778 [2024-11-20 13:48:14.672734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:02.778 [2024-11-20 13:48:14.672744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.778 [2024-11-20 13:48:14.672755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:02.778 [2024-11-20 13:48:14.672765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:02.778 [2024-11-20 13:48:14.672775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.778 [2024-11-20 13:48:14.672786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:02.778 [2024-11-20 13:48:14.672797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:02.778 [2024-11-20 13:48:14.672807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.778 [2024-11-20 13:48:14.672817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:02.778 [2024-11-20 13:48:14.672827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:02.778 [2024-11-20 13:48:14.672838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.778 [2024-11-20 13:48:14.672848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:02.778 [2024-11-20 13:48:14.672859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:02.778 [2024-11-20 13:48:14.672869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.778 [2024-11-20 13:48:14.672879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:02.778 [2024-11-20 13:48:14.672889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:02.778 [2024-11-20 13:48:14.672900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:02.778 [2024-11-20 13:48:14.672910] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:29:02.778 [2024-11-20 13:48:14.672920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:02.778 [2024-11-20 13:48:14.672929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:02.778 [2024-11-20 13:48:14.672939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:02.778 [2024-11-20 13:48:14.672949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:02.778 [2024-11-20 13:48:14.672959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.778 [2024-11-20 13:48:14.672969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:02.778 [2024-11-20 13:48:14.672979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:02.778 [2024-11-20 13:48:14.672988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.778 [2024-11-20 13:48:14.672998] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:02.778 [2024-11-20 13:48:14.673010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:02.778 [2024-11-20 13:48:14.673021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:02.778 [2024-11-20 13:48:14.673031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.778 [2024-11-20 13:48:14.673042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:02.778 [2024-11-20 13:48:14.673052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:02.778 [2024-11-20 13:48:14.673063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:02.778 [2024-11-20 13:48:14.673073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:02.778 [2024-11-20 13:48:14.673083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:02.778 [2024-11-20 13:48:14.673093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:02.778 [2024-11-20 13:48:14.673105] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:02.778 [2024-11-20 13:48:14.673119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.778 [2024-11-20 13:48:14.673132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:02.778 [2024-11-20 13:48:14.673145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:02.778 [2024-11-20 13:48:14.673157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:02.778 [2024-11-20 13:48:14.673167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:02.778 [2024-11-20 13:48:14.673181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:02.778 [2024-11-20 13:48:14.673193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:02.778 [2024-11-20 13:48:14.673204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:02.778 [2024-11-20 13:48:14.673216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:02.778 [2024-11-20 13:48:14.673227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:02.778 [2024-11-20 13:48:14.673240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:02.778 [2024-11-20 13:48:14.673258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:02.778 [2024-11-20 13:48:14.673274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:02.778 [2024-11-20 13:48:14.673291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:02.778 [2024-11-20 13:48:14.673308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:02.778 [2024-11-20 13:48:14.673324] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:02.778 [2024-11-20 13:48:14.673354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.778 [2024-11-20 13:48:14.673372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:02.778 [2024-11-20 13:48:14.673390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:02.779 [2024-11-20 13:48:14.673408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:02.779 [2024-11-20 13:48:14.673425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:02.779 [2024-11-20 13:48:14.673443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.779 [2024-11-20 13:48:14.673463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:02.779 [2024-11-20 13:48:14.673480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:29:02.779 [2024-11-20 13:48:14.673496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.779 [2024-11-20 13:48:14.717353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.779 [2024-11-20 13:48:14.717422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:02.779 [2024-11-20 13:48:14.717441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.849 ms 00:29:02.779 [2024-11-20 13:48:14.717453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.779 [2024-11-20 13:48:14.717576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.779 [2024-11-20 13:48:14.717589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:02.779 [2024-11-20 13:48:14.717615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.062 ms 00:29:02.779 [2024-11-20 13:48:14.717628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.038 [2024-11-20 13:48:14.794651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.038 [2024-11-20 13:48:14.794716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:03.038 [2024-11-20 13:48:14.794735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.048 ms 00:29:03.038 [2024-11-20 13:48:14.794754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.038 [2024-11-20 13:48:14.794835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.038 [2024-11-20 13:48:14.794850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:03.038 [2024-11-20 13:48:14.794868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:03.038 [2024-11-20 13:48:14.794879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.038 [2024-11-20 13:48:14.795425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.038 [2024-11-20 13:48:14.795452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:03.038 [2024-11-20 13:48:14.795466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:29:03.038 [2024-11-20 13:48:14.795477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.038 [2024-11-20 13:48:14.795630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.038 [2024-11-20 13:48:14.795653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:03.038 [2024-11-20 13:48:14.795666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:29:03.038 [2024-11-20 13:48:14.795684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.038 [2024-11-20 13:48:14.817221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.038 [2024-11-20 13:48:14.817278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:03.038 [2024-11-20 13:48:14.817300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.545 ms 00:29:03.038 [2024-11-20 13:48:14.817312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.038 [2024-11-20 13:48:14.838644] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:03.038 [2024-11-20 13:48:14.838716] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:03.038 [2024-11-20 13:48:14.838741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.038 [2024-11-20 13:48:14.838760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:03.038 [2024-11-20 13:48:14.838775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.299 ms 00:29:03.038 [2024-11-20 13:48:14.838787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.038 [2024-11-20 13:48:14.871945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.038 [2024-11-20 13:48:14.872035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:03.038 [2024-11-20 13:48:14.872055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.143 ms 00:29:03.038 [2024-11-20 13:48:14.872067] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.038 [2024-11-20 13:48:14.892533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.038 [2024-11-20 13:48:14.892637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:03.038 [2024-11-20 13:48:14.892655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.420 ms 00:29:03.038 [2024-11-20 13:48:14.892667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.038 [2024-11-20 13:48:14.913322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.038 [2024-11-20 13:48:14.913390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:03.038 [2024-11-20 13:48:14.913409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.626 ms 00:29:03.038 [2024-11-20 13:48:14.913421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.038 [2024-11-20 13:48:14.914334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.038 [2024-11-20 13:48:14.914373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:03.038 [2024-11-20 13:48:14.914387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:29:03.038 [2024-11-20 13:48:14.914399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.297 [2024-11-20 13:48:15.009894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.297 [2024-11-20 13:48:15.009972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:03.297 [2024-11-20 13:48:15.009992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.604 ms 00:29:03.297 [2024-11-20 13:48:15.010014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.297 [2024-11-20 13:48:15.023246] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:03.297 [2024-11-20 13:48:15.026662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.297 [2024-11-20 13:48:15.026715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:03.297 [2024-11-20 13:48:15.026736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.588 ms 00:29:03.297 [2024-11-20 13:48:15.026751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.297 [2024-11-20 13:48:15.026907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.297 [2024-11-20 13:48:15.026925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:03.297 [2024-11-20 13:48:15.026942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:03.297 [2024-11-20 13:48:15.026957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.297 [2024-11-20 13:48:15.027051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.297 [2024-11-20 13:48:15.027075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:03.297 [2024-11-20 13:48:15.027090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:03.297 [2024-11-20 13:48:15.027104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.297 [2024-11-20 13:48:15.027136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.297 [2024-11-20 13:48:15.027151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:29:03.297 [2024-11-20 13:48:15.027166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:03.297 [2024-11-20 13:48:15.027182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.297 [2024-11-20 13:48:15.027232] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:03.297 [2024-11-20 13:48:15.027251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.297 [2024-11-20 13:48:15.027269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:03.297 [2024-11-20 13:48:15.027285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:03.297 [2024-11-20 13:48:15.027298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.297 [2024-11-20 13:48:15.066355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.297 [2024-11-20 13:48:15.066433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:03.297 [2024-11-20 13:48:15.066454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.086 ms 00:29:03.297 [2024-11-20 13:48:15.066469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.297 [2024-11-20 13:48:15.066612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.297 [2024-11-20 13:48:15.066628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:03.297 [2024-11-20 13:48:15.066641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:29:03.297 [2024-11-20 13:48:15.066653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.297 [2024-11-20 13:48:15.068106] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 433.527 ms, result 0 00:29:04.244  [2024-11-20T13:48:17.152Z] Copying: 28/1024 [MB] (28 MBps) [2024-11-20T13:48:18.088Z] Copying: 58/1024 [MB] (30 MBps) [2024-11-20T13:48:19.464Z] Copying: 87/1024 [MB] (29 MBps) [2024-11-20T13:48:20.400Z] Copying: 119/1024 [MB] (31 MBps) [2024-11-20T13:48:21.401Z] Copying: 148/1024 [MB] (29 MBps) [2024-11-20T13:48:22.337Z] Copying: 178/1024 [MB] (29 MBps) [2024-11-20T13:48:23.273Z] Copying: 208/1024 [MB] (30 MBps) [2024-11-20T13:48:24.208Z] Copying: 243/1024 [MB] (34 MBps) [2024-11-20T13:48:25.142Z] Copying: 277/1024 [MB] (33 MBps) [2024-11-20T13:48:26.080Z] Copying: 309/1024 [MB] (32 MBps) [2024-11-20T13:48:27.456Z] Copying: 343/1024 [MB] (33 MBps) [2024-11-20T13:48:28.391Z] Copying: 374/1024 [MB] (30 MBps) [2024-11-20T13:48:29.327Z] Copying: 404/1024 [MB] (30 MBps) [2024-11-20T13:48:30.261Z] Copying: 436/1024 [MB] (31 MBps) [2024-11-20T13:48:31.193Z] Copying: 467/1024 [MB] (30 MBps) [2024-11-20T13:48:32.210Z] Copying: 499/1024 [MB] (32 MBps) [2024-11-20T13:48:33.149Z] Copying: 529/1024 [MB] (30 MBps) [2024-11-20T13:48:34.086Z] Copying: 559/1024 [MB] (29 MBps) [2024-11-20T13:48:35.461Z] Copying: 589/1024 [MB] (29 MBps) [2024-11-20T13:48:36.076Z] Copying: 618/1024 [MB] (29 MBps) [2024-11-20T13:48:37.452Z] Copying: 646/1024 [MB] (27 MBps) [2024-11-20T13:48:38.388Z] Copying: 673/1024 [MB] (27 MBps) [2024-11-20T13:48:39.324Z] Copying: 703/1024 [MB] (30 MBps) [2024-11-20T13:48:40.260Z] Copying: 732/1024 [MB] (28 MBps) [2024-11-20T13:48:41.197Z] Copying: 763/1024 [MB] (31 MBps) [2024-11-20T13:48:42.132Z] Copying: 796/1024 [MB] (32 MBps) [2024-11-20T13:48:43.069Z] Copying: 825/1024 [MB] (29 
MBps) [2024-11-20T13:48:44.447Z] Copying: 852/1024 [MB] (27 MBps) [2024-11-20T13:48:45.462Z] Copying: 880/1024 [MB] (27 MBps) [2024-11-20T13:48:46.398Z] Copying: 907/1024 [MB] (27 MBps) [2024-11-20T13:48:47.335Z] Copying: 935/1024 [MB] (27 MBps) [2024-11-20T13:48:48.267Z] Copying: 963/1024 [MB] (27 MBps) [2024-11-20T13:48:49.201Z] Copying: 991/1024 [MB] (28 MBps) [2024-11-20T13:48:49.201Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-20 13:48:49.021901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.244 [2024-11-20 13:48:49.021989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:37.244 [2024-11-20 13:48:49.022015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:37.244 [2024-11-20 13:48:49.022032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.244 [2024-11-20 13:48:49.022066] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:37.244 [2024-11-20 13:48:49.025961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.244 [2024-11-20 13:48:49.026016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:37.244 [2024-11-20 13:48:49.026038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.875 ms 00:29:37.244 [2024-11-20 13:48:49.026064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.244 [2024-11-20 13:48:49.027697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.244 [2024-11-20 13:48:49.027746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:37.244 [2024-11-20 13:48:49.027766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.591 ms 00:29:37.244 [2024-11-20 13:48:49.027783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.244 [2024-11-20 13:48:49.048020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.244 [2024-11-20 13:48:49.048077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:37.244 [2024-11-20 13:48:49.048100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.241 ms 00:29:37.244 [2024-11-20 13:48:49.048117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.244 [2024-11-20 13:48:49.054688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.244 [2024-11-20 13:48:49.054735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:37.244 [2024-11-20 13:48:49.054753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.520 ms 00:29:37.244 [2024-11-20 13:48:49.054771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.244 [2024-11-20 13:48:49.093486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.244 [2024-11-20 13:48:49.093573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:37.244 [2024-11-20 13:48:49.093606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.681 ms 00:29:37.244 [2024-11-20 13:48:49.093623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.244 [2024-11-20 13:48:49.116474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.244 [2024-11-20 13:48:49.116564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:37.244 [2024-11-20 13:48:49.116591] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.811 ms 00:29:37.244 [2024-11-20 13:48:49.116617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.244 [2024-11-20 13:48:49.116812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.244 [2024-11-20 13:48:49.116834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:37.244 [2024-11-20 13:48:49.116864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:29:37.244 [2024-11-20 13:48:49.116880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.244 [2024-11-20 13:48:49.157040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.244 [2024-11-20 13:48:49.157133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:37.244 [2024-11-20 13:48:49.157158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.196 ms 00:29:37.244 [2024-11-20 13:48:49.157175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.244 [2024-11-20 13:48:49.196862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.244 [2024-11-20 13:48:49.196955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:37.244 [2024-11-20 13:48:49.196998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.668 ms 00:29:37.244 [2024-11-20 13:48:49.197014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.583 [2024-11-20 13:48:49.235217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.583 [2024-11-20 13:48:49.235285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:37.583 [2024-11-20 13:48:49.235309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.176 ms 00:29:37.583 [2024-11-20 13:48:49.235326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.583 [2024-11-20 13:48:49.272961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.583 [2024-11-20 13:48:49.273046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:37.583 [2024-11-20 13:48:49.273071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.496 ms 00:29:37.583 [2024-11-20 13:48:49.273088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.583 [2024-11-20 13:48:49.273164] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:37.583 [2024-11-20 13:48:49.273190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: 
free 00:29:37.583 [2024-11-20 13:48:49.273317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:37.583 [2024-11-20 13:48:49.273581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 
261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.273997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:37.584 [2024-11-20 13:48:49.274772] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.274790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.274808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.274826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.274844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.274861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.274878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.274895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.274913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.274930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.274947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.274965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.274982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.275000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.275018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.275035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.275052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.275070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.275087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:37.585 [2024-11-20 13:48:49.275114] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:37.585 [2024-11-20 13:48:49.275137] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e41e1e69-a5d8-48be-8784-65345996624c 00:29:37.585 [2024-11-20 13:48:49.275160] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:37.585 [2024-11-20 13:48:49.275176] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:37.585 [2024-11-20 13:48:49.275192] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:37.585 [2024-11-20 13:48:49.275209] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:37.585 [2024-11-20 13:48:49.275227] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:37.585 [2024-11-20 13:48:49.275244] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:37.585 [2024-11-20 13:48:49.275261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:37.585 [2024-11-20 13:48:49.275291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:37.585 [2024-11-20 13:48:49.275306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:37.585 [2024-11-20 13:48:49.275325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.585 [2024-11-20 13:48:49.275341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:37.585 [2024-11-20 13:48:49.275359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.166 ms 00:29:37.585 [2024-11-20 13:48:49.275376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.585 [2024-11-20 13:48:49.295807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.585 [2024-11-20 13:48:49.295879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:37.585 [2024-11-20 13:48:49.295902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.359 ms 00:29:37.585 [2024-11-20 13:48:49.295918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.585 [2024-11-20 13:48:49.296447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:37.585 [2024-11-20 13:48:49.296472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:37.585 [2024-11-20 13:48:49.296490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:29:37.585 [2024-11-20 13:48:49.296506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.585 [2024-11-20 13:48:49.348381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.585 [2024-11-20 13:48:49.348467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:37.585 [2024-11-20 13:48:49.348490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.585 [2024-11-20 13:48:49.348507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.585 [2024-11-20 13:48:49.348629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.585 [2024-11-20 13:48:49.348647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:37.585 [2024-11-20 13:48:49.348664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.585 [2024-11-20 13:48:49.348681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.585 [2024-11-20 13:48:49.348798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.585 [2024-11-20 13:48:49.348818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:37.585 [2024-11-20 13:48:49.348835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.585 [2024-11-20 13:48:49.348851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.585 [2024-11-20 13:48:49.348878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.585 [2024-11-20 13:48:49.348895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:37.585 [2024-11-20 13:48:49.348912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.585 [2024-11-20 13:48:49.348927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
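The statistics dump above prints total writes: 960 against user writes: 0, and reports WAF: inf. Write amplification is the ratio of media writes to host writes, so with a zero user-write counter the ratio is undefined and is printed as inf. A minimal standalone sketch of that arithmetic (illustrative only; the function and variable names below are not SPDK's ftl_debug.c):

#include <math.h>
#include <stdio.h>
#include <stdint.h>

/* WAF = media writes / user writes; undefined (shown as "inf") when the
 * host has not written anything yet. */
static double write_amplification(uint64_t total_writes, uint64_t user_writes)
{
	if (user_writes == 0) {
		return INFINITY; /* printf("%g", INFINITY) prints "inf" */
	}
	return (double)total_writes / (double)user_writes;
}

int main(void)
{
	/* Counters as printed in the dump above. */
	printf("WAF: %g\n", write_amplification(960, 0)); /* -> WAF: inf */
	return 0;
}

Compiled and run, this prints "WAF: inf", matching the ftl_dev_dump_stats line above.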
00:29:37.585 [2024-11-20 13:48:49.475898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.585 [2024-11-20 13:48:49.475990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:37.585 [2024-11-20 13:48:49.476014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.585 [2024-11-20 13:48:49.476031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.843 [2024-11-20 13:48:49.580477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.843 [2024-11-20 13:48:49.580562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:37.843 [2024-11-20 13:48:49.580585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.843 [2024-11-20 13:48:49.580619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.843 [2024-11-20 13:48:49.580769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.843 [2024-11-20 13:48:49.580788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:37.843 [2024-11-20 13:48:49.580805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.843 [2024-11-20 13:48:49.580821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.843 [2024-11-20 13:48:49.580880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.843 [2024-11-20 13:48:49.580898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:37.843 [2024-11-20 13:48:49.580914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.843 [2024-11-20 13:48:49.580930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.843 [2024-11-20 13:48:49.581063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.843 [2024-11-20 13:48:49.581087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:37.843 [2024-11-20 13:48:49.581103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.843 [2024-11-20 13:48:49.581120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.843 [2024-11-20 13:48:49.581168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.843 [2024-11-20 13:48:49.581186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:37.843 [2024-11-20 13:48:49.581204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.843 [2024-11-20 13:48:49.581219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.843 [2024-11-20 13:48:49.581271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.843 [2024-11-20 13:48:49.581293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:37.843 [2024-11-20 13:48:49.581309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.843 [2024-11-20 13:48:49.581325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.843 [2024-11-20 13:48:49.581384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:37.843 [2024-11-20 13:48:49.581402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:37.843 [2024-11-20 13:48:49.581418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:37.843 [2024-11-20 13:48:49.581434] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:37.843 [2024-11-20 13:48:49.581591] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 560.556 ms, result 0 00:29:39.215 00:29:39.215 00:29:39.215 13:48:51 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:39.474 [2024-11-20 13:48:51.191021] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:29:39.474 [2024-11-20 13:48:51.191165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79960 ] 00:29:39.474 [2024-11-20 13:48:51.362359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.733 [2024-11-20 13:48:51.484368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.993 [2024-11-20 13:48:51.853141] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:39.993 [2024-11-20 13:48:51.853221] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:40.253 [2024-11-20 13:48:52.017459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.253 [2024-11-20 13:48:52.017534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:40.253 [2024-11-20 13:48:52.017557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:40.253 [2024-11-20 13:48:52.017569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.253 [2024-11-20 13:48:52.017652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.253 [2024-11-20 13:48:52.017666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:40.253 [2024-11-20 13:48:52.017680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:40.253 [2024-11-20 13:48:52.017690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.253 [2024-11-20 13:48:52.017714] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:40.253 [2024-11-20 13:48:52.018702] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:40.253 [2024-11-20 13:48:52.018734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.253 [2024-11-20 13:48:52.018745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:40.253 [2024-11-20 13:48:52.018757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:29:40.253 [2024-11-20 13:48:52.018768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.253 [2024-11-20 13:48:52.020266] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:40.253 [2024-11-20 13:48:52.040177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.254 [2024-11-20 13:48:52.040245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:40.254 [2024-11-20 13:48:52.040263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.941 ms 00:29:40.254 [2024-11-20 
13:48:52.040274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.254 [2024-11-20 13:48:52.040384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.254 [2024-11-20 13:48:52.040398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:40.254 [2024-11-20 13:48:52.040410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:40.254 [2024-11-20 13:48:52.040420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.254 [2024-11-20 13:48:52.048013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.254 [2024-11-20 13:48:52.048061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:40.254 [2024-11-20 13:48:52.048075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.508 ms 00:29:40.254 [2024-11-20 13:48:52.048090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.254 [2024-11-20 13:48:52.048181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.254 [2024-11-20 13:48:52.048196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:40.254 [2024-11-20 13:48:52.048207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:40.254 [2024-11-20 13:48:52.048218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.254 [2024-11-20 13:48:52.048269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.254 [2024-11-20 13:48:52.048282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:40.254 [2024-11-20 13:48:52.048292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:40.254 [2024-11-20 13:48:52.048303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.254 [2024-11-20 13:48:52.048336] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:40.254 [2024-11-20 13:48:52.053245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.254 [2024-11-20 13:48:52.053287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:40.254 [2024-11-20 13:48:52.053300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.929 ms 00:29:40.254 [2024-11-20 13:48:52.053315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.254 [2024-11-20 13:48:52.053354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.254 [2024-11-20 13:48:52.053365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:40.254 [2024-11-20 13:48:52.053376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:40.254 [2024-11-20 13:48:52.053385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.254 [2024-11-20 13:48:52.053452] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:40.254 [2024-11-20 13:48:52.053479] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:40.254 [2024-11-20 13:48:52.053517] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:40.254 [2024-11-20 13:48:52.053539] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:40.254 [2024-11-20 
13:48:52.053643] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:40.254 [2024-11-20 13:48:52.053658] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:40.254 [2024-11-20 13:48:52.053673] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:40.254 [2024-11-20 13:48:52.053687] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:40.254 [2024-11-20 13:48:52.053699] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:40.254 [2024-11-20 13:48:52.053711] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:40.254 [2024-11-20 13:48:52.053720] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:40.254 [2024-11-20 13:48:52.053730] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:40.254 [2024-11-20 13:48:52.053745] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:40.254 [2024-11-20 13:48:52.053756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.254 [2024-11-20 13:48:52.053766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:40.254 [2024-11-20 13:48:52.053777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:29:40.254 [2024-11-20 13:48:52.053787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.254 [2024-11-20 13:48:52.053863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.254 [2024-11-20 13:48:52.053874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:40.254 [2024-11-20 13:48:52.053885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:40.254 [2024-11-20 13:48:52.053894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.254 [2024-11-20 13:48:52.053996] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:40.254 [2024-11-20 13:48:52.054011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:40.254 [2024-11-20 13:48:52.054023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:40.254 [2024-11-20 13:48:52.054033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.254 [2024-11-20 13:48:52.054044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:40.254 [2024-11-20 13:48:52.054053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:40.254 [2024-11-20 13:48:52.054062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:40.254 [2024-11-20 13:48:52.054072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:40.254 [2024-11-20 13:48:52.054082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:40.254 [2024-11-20 13:48:52.054091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:40.254 [2024-11-20 13:48:52.054102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:40.254 [2024-11-20 13:48:52.054112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:40.254 [2024-11-20 13:48:52.054121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
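Each region in this layout dump is reported twice: here in MiB via dump_region, and in the "SB metadata layout" dumps as blk_offs/blk_sz block counts (the MiB dump continues below with the remaining regions). The regions are packed back to back, so each offset equals the previous offset plus the previous size. A short sketch replaying the first four NV-cache entries from the hex dump and checking that invariant (illustrative only, not SPDK's ftl_layout.c; the 4096-byte block size and the region-type-to-name mapping are assumptions inferred from the matching MiB figures above):

#include <stdio.h>
#include <stdint.h>

#define BLOCK_SIZE 4096u /* assumed FTL block size; reproduces the MiB values */

struct region { const char *name; uint64_t blk_offs; uint64_t blk_sz; };

int main(void)
{
	/* blk_offs/blk_sz copied from the "SB metadata layout - nvc" dump;
	 * names inferred: type 0x0=sb, 0x2=l2p, 0x3=band_md, 0x4=band_md_mirror. */
	const struct region r[] = {
		{ "sb",             0x0000, 0x0020 },
		{ "l2p",            0x0020, 0x5000 },
		{ "band_md",        0x5020, 0x0080 },
		{ "band_md_mirror", 0x50a0, 0x0080 },
	};
	uint64_t next = 0;
	for (size_t i = 0; i < sizeof(r) / sizeof(r[0]); i++) {
		double offs_mib = r[i].blk_offs * (double)BLOCK_SIZE / (1024 * 1024);
		double sz_mib   = r[i].blk_sz   * (double)BLOCK_SIZE / (1024 * 1024);
		printf("%-15s offset %7.2f MiB blocks %6.2f MiB%s\n",
		       r[i].name, offs_mib, sz_mib,
		       r[i].blk_offs == next ? "" : "  <-- gap/overlap");
		next = r[i].blk_offs + r[i].blk_sz;
	}
	return 0;
}

The output reproduces the figures printed above (sb at 0.00/0.12 MiB, l2p at 0.12/80.00 MiB, band_md at 80.12/0.50 MiB, band_md_mirror at 80.62/0.50 MiB) with no gaps flagged.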
00:29:40.254 [2024-11-20 13:48:52.054131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:40.254 [2024-11-20 13:48:52.054140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:40.254 [2024-11-20 13:48:52.054161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.254 [2024-11-20 13:48:52.054170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:40.254 [2024-11-20 13:48:52.054190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:40.254 [2024-11-20 13:48:52.054200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.254 [2024-11-20 13:48:52.054210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:40.254 [2024-11-20 13:48:52.054220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:40.254 [2024-11-20 13:48:52.054229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:40.254 [2024-11-20 13:48:52.054238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:40.254 [2024-11-20 13:48:52.054248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:40.254 [2024-11-20 13:48:52.054258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:40.254 [2024-11-20 13:48:52.054267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:40.254 [2024-11-20 13:48:52.054276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:40.254 [2024-11-20 13:48:52.054286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:40.254 [2024-11-20 13:48:52.054294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:40.254 [2024-11-20 13:48:52.054304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:40.254 [2024-11-20 13:48:52.054313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:40.254 [2024-11-20 13:48:52.054322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:40.254 [2024-11-20 13:48:52.054331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:40.254 [2024-11-20 13:48:52.054340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:40.254 [2024-11-20 13:48:52.054349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:40.254 [2024-11-20 13:48:52.054358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:40.254 [2024-11-20 13:48:52.054367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:40.254 [2024-11-20 13:48:52.054376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:40.254 [2024-11-20 13:48:52.054386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:40.254 [2024-11-20 13:48:52.054395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.254 [2024-11-20 13:48:52.054403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:40.254 [2024-11-20 13:48:52.054412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:40.254 [2024-11-20 13:48:52.054422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.254 [2024-11-20 13:48:52.054432] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:40.254 [2024-11-20 13:48:52.054442] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:40.254 [2024-11-20 13:48:52.054452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:40.254 [2024-11-20 13:48:52.054462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:40.254 [2024-11-20 13:48:52.054472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:40.254 [2024-11-20 13:48:52.054481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:40.254 [2024-11-20 13:48:52.054491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:40.254 [2024-11-20 13:48:52.054500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:40.254 [2024-11-20 13:48:52.054509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:40.254 [2024-11-20 13:48:52.054518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:40.254 [2024-11-20 13:48:52.054530] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:40.254 [2024-11-20 13:48:52.054543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:40.255 [2024-11-20 13:48:52.054554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:40.255 [2024-11-20 13:48:52.054565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:40.255 [2024-11-20 13:48:52.054575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:40.255 [2024-11-20 13:48:52.054585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:40.255 [2024-11-20 13:48:52.054596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:40.255 [2024-11-20 13:48:52.054617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:40.255 [2024-11-20 13:48:52.054627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:40.255 [2024-11-20 13:48:52.054637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:40.255 [2024-11-20 13:48:52.054647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:40.255 [2024-11-20 13:48:52.054658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:40.255 [2024-11-20 13:48:52.054668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:40.255 [2024-11-20 13:48:52.054678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:40.255 [2024-11-20 13:48:52.054689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:40.255 [2024-11-20 
13:48:52.054699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:40.255 [2024-11-20 13:48:52.054710] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:40.255 [2024-11-20 13:48:52.054724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:40.255 [2024-11-20 13:48:52.054735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:40.255 [2024-11-20 13:48:52.054746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:40.255 [2024-11-20 13:48:52.054756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:40.255 [2024-11-20 13:48:52.054769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:40.255 [2024-11-20 13:48:52.054780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.255 [2024-11-20 13:48:52.054793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:40.255 [2024-11-20 13:48:52.054803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:29:40.255 [2024-11-20 13:48:52.054814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.255 [2024-11-20 13:48:52.094641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.255 [2024-11-20 13:48:52.094702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:40.255 [2024-11-20 13:48:52.094719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.826 ms 00:29:40.255 [2024-11-20 13:48:52.094731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.255 [2024-11-20 13:48:52.094843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.255 [2024-11-20 13:48:52.094855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:40.255 [2024-11-20 13:48:52.094866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:40.255 [2024-11-20 13:48:52.094876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.255 [2024-11-20 13:48:52.152992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.255 [2024-11-20 13:48:52.153054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:40.255 [2024-11-20 13:48:52.153070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.123 ms 00:29:40.255 [2024-11-20 13:48:52.153081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.255 [2024-11-20 13:48:52.153144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.255 [2024-11-20 13:48:52.153155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:40.255 [2024-11-20 13:48:52.153170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:40.255 [2024-11-20 13:48:52.153181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.255 [2024-11-20 13:48:52.153710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:29:40.255 [2024-11-20 13:48:52.153733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:40.255 [2024-11-20 13:48:52.153746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:29:40.255 [2024-11-20 13:48:52.153756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.255 [2024-11-20 13:48:52.153883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.255 [2024-11-20 13:48:52.153898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:40.255 [2024-11-20 13:48:52.153910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:29:40.255 [2024-11-20 13:48:52.153926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.255 [2024-11-20 13:48:52.173909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.255 [2024-11-20 13:48:52.173956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:40.255 [2024-11-20 13:48:52.173975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.992 ms 00:29:40.255 [2024-11-20 13:48:52.173986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.255 [2024-11-20 13:48:52.193285] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:40.255 [2024-11-20 13:48:52.193329] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:40.255 [2024-11-20 13:48:52.193345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.255 [2024-11-20 13:48:52.193357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:40.255 [2024-11-20 13:48:52.193369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.256 ms 00:29:40.255 [2024-11-20 13:48:52.193380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.515 [2024-11-20 13:48:52.224751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.515 [2024-11-20 13:48:52.224810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:40.515 [2024-11-20 13:48:52.224826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.376 ms 00:29:40.515 [2024-11-20 13:48:52.224837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.515 [2024-11-20 13:48:52.244914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.515 [2024-11-20 13:48:52.244981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:40.515 [2024-11-20 13:48:52.245000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.049 ms 00:29:40.515 [2024-11-20 13:48:52.245010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.515 [2024-11-20 13:48:52.262910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.515 [2024-11-20 13:48:52.262953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:40.515 [2024-11-20 13:48:52.262968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.874 ms 00:29:40.515 [2024-11-20 13:48:52.262977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.515 [2024-11-20 13:48:52.263761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.515 [2024-11-20 13:48:52.263795] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:40.515 [2024-11-20 13:48:52.263808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:29:40.515 [2024-11-20 13:48:52.263823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.515 [2024-11-20 13:48:52.352111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.515 [2024-11-20 13:48:52.352188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:40.515 [2024-11-20 13:48:52.352214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.405 ms 00:29:40.515 [2024-11-20 13:48:52.352226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.515 [2024-11-20 13:48:52.364791] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:40.515 [2024-11-20 13:48:52.368135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.515 [2024-11-20 13:48:52.368179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:40.515 [2024-11-20 13:48:52.368196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.828 ms 00:29:40.515 [2024-11-20 13:48:52.368208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.515 [2024-11-20 13:48:52.368341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.515 [2024-11-20 13:48:52.368355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:40.515 [2024-11-20 13:48:52.368367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:40.515 [2024-11-20 13:48:52.368382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.515 [2024-11-20 13:48:52.368481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.515 [2024-11-20 13:48:52.368496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:40.516 [2024-11-20 13:48:52.368507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:40.516 [2024-11-20 13:48:52.368517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.516 [2024-11-20 13:48:52.368544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.516 [2024-11-20 13:48:52.368556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:40.516 [2024-11-20 13:48:52.368567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:40.516 [2024-11-20 13:48:52.368577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.516 [2024-11-20 13:48:52.368628] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:40.516 [2024-11-20 13:48:52.368640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.516 [2024-11-20 13:48:52.368651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:40.516 [2024-11-20 13:48:52.368662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:40.516 [2024-11-20 13:48:52.368673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.516 [2024-11-20 13:48:52.406605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.516 [2024-11-20 13:48:52.406681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:40.516 [2024-11-20 13:48:52.406699] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.961 ms 00:29:40.516 [2024-11-20 13:48:52.406720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.516 [2024-11-20 13:48:52.406838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:40.516 [2024-11-20 13:48:52.406851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:40.516 [2024-11-20 13:48:52.406863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:29:40.516 [2024-11-20 13:48:52.406873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:40.516 [2024-11-20 13:48:52.408143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 390.803 ms, result 0 00:29:41.894  [2024-11-20T13:48:54.786Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-20T13:48:55.739Z] Copying: 55/1024 [MB] (27 MBps) [2024-11-20T13:48:56.676Z] Copying: 83/1024 [MB] (28 MBps) [2024-11-20T13:48:58.050Z] Copying: 110/1024 [MB] (27 MBps) [2024-11-20T13:48:58.986Z] Copying: 139/1024 [MB] (28 MBps) [2024-11-20T13:48:59.921Z] Copying: 168/1024 [MB] (28 MBps) [2024-11-20T13:49:00.857Z] Copying: 198/1024 [MB] (30 MBps) [2024-11-20T13:49:01.791Z] Copying: 231/1024 [MB] (32 MBps) [2024-11-20T13:49:02.726Z] Copying: 261/1024 [MB] (29 MBps) [2024-11-20T13:49:03.663Z] Copying: 295/1024 [MB] (34 MBps) [2024-11-20T13:49:05.040Z] Copying: 325/1024 [MB] (30 MBps) [2024-11-20T13:49:06.030Z] Copying: 356/1024 [MB] (31 MBps) [2024-11-20T13:49:06.965Z] Copying: 388/1024 [MB] (31 MBps) [2024-11-20T13:49:07.898Z] Copying: 420/1024 [MB] (32 MBps) [2024-11-20T13:49:08.835Z] Copying: 452/1024 [MB] (31 MBps) [2024-11-20T13:49:09.848Z] Copying: 481/1024 [MB] (29 MBps) [2024-11-20T13:49:10.785Z] Copying: 510/1024 [MB] (28 MBps) [2024-11-20T13:49:11.755Z] Copying: 538/1024 [MB] (28 MBps) [2024-11-20T13:49:12.691Z] Copying: 567/1024 [MB] (28 MBps) [2024-11-20T13:49:13.627Z] Copying: 595/1024 [MB] (28 MBps) [2024-11-20T13:49:15.005Z] Copying: 625/1024 [MB] (29 MBps) [2024-11-20T13:49:15.942Z] Copying: 654/1024 [MB] (29 MBps) [2024-11-20T13:49:16.877Z] Copying: 684/1024 [MB] (29 MBps) [2024-11-20T13:49:17.856Z] Copying: 717/1024 [MB] (33 MBps) [2024-11-20T13:49:18.792Z] Copying: 749/1024 [MB] (31 MBps) [2024-11-20T13:49:19.730Z] Copying: 780/1024 [MB] (30 MBps) [2024-11-20T13:49:20.665Z] Copying: 811/1024 [MB] (31 MBps) [2024-11-20T13:49:21.605Z] Copying: 846/1024 [MB] (34 MBps) [2024-11-20T13:49:23.021Z] Copying: 877/1024 [MB] (31 MBps) [2024-11-20T13:49:23.589Z] Copying: 907/1024 [MB] (29 MBps) [2024-11-20T13:49:24.967Z] Copying: 936/1024 [MB] (29 MBps) [2024-11-20T13:49:25.903Z] Copying: 965/1024 [MB] (28 MBps) [2024-11-20T13:49:26.840Z] Copying: 993/1024 [MB] (28 MBps) [2024-11-20T13:49:26.840Z] Copying: 1020/1024 [MB] (27 MBps) [2024-11-20T13:49:27.494Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-20 13:49:27.146021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.537 [2024-11-20 13:49:27.146096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:15.537 [2024-11-20 13:49:27.146114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:15.537 [2024-11-20 13:49:27.146127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.537 [2024-11-20 13:49:27.146155] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:15.537 [2024-11-20 13:49:27.151755] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.537 [2024-11-20 13:49:27.151799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:15.537 [2024-11-20 13:49:27.151823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.586 ms 00:30:15.537 [2024-11-20 13:49:27.151835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.537 [2024-11-20 13:49:27.152069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.537 [2024-11-20 13:49:27.152084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:15.537 [2024-11-20 13:49:27.152096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:30:15.537 [2024-11-20 13:49:27.152107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.537 [2024-11-20 13:49:27.155553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.537 [2024-11-20 13:49:27.155580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:15.537 [2024-11-20 13:49:27.155593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.433 ms 00:30:15.537 [2024-11-20 13:49:27.155612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.537 [2024-11-20 13:49:27.162273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.537 [2024-11-20 13:49:27.162319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:15.537 [2024-11-20 13:49:27.162334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.635 ms 00:30:15.537 [2024-11-20 13:49:27.162345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.537 [2024-11-20 13:49:27.201520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.537 [2024-11-20 13:49:27.201568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:15.537 [2024-11-20 13:49:27.201585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.119 ms 00:30:15.537 [2024-11-20 13:49:27.201595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.537 [2024-11-20 13:49:27.221966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.537 [2024-11-20 13:49:27.222006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:15.537 [2024-11-20 13:49:27.222023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.340 ms 00:30:15.537 [2024-11-20 13:49:27.222034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.537 [2024-11-20 13:49:27.222173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.537 [2024-11-20 13:49:27.222202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:15.537 [2024-11-20 13:49:27.222213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:30:15.537 [2024-11-20 13:49:27.222224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.538 [2024-11-20 13:49:27.258513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.538 [2024-11-20 13:49:27.258563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:15.538 [2024-11-20 13:49:27.258579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.327 ms 00:30:15.538 [2024-11-20 13:49:27.258589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:30:15.538 [2024-11-20 13:49:27.296432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.538 [2024-11-20 13:49:27.296520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:15.538 [2024-11-20 13:49:27.296537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.844 ms 00:30:15.538 [2024-11-20 13:49:27.296548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.538 [2024-11-20 13:49:27.332023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.538 [2024-11-20 13:49:27.332069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:15.538 [2024-11-20 13:49:27.332085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.443 ms 00:30:15.538 [2024-11-20 13:49:27.332095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.538 [2024-11-20 13:49:27.367784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.538 [2024-11-20 13:49:27.367828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:15.538 [2024-11-20 13:49:27.367844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.652 ms 00:30:15.538 [2024-11-20 13:49:27.367855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.538 [2024-11-20 13:49:27.367898] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:15.538 [2024-11-20 13:49:27.367916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.367936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.367948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.367959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.367971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.367982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.367993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368079] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 
13:49:27.368360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:15.538 [2024-11-20 13:49:27.368507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 
00:30:15.539 [2024-11-20 13:49:27.368638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.368990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.369001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.369012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:15.539 [2024-11-20 13:49:27.369035] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:15.539 [2024-11-20 13:49:27.369059] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e41e1e69-a5d8-48be-8784-65345996624c 00:30:15.539 [2024-11-20 13:49:27.369077] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:15.539 [2024-11-20 13:49:27.369093] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:15.539 [2024-11-20 13:49:27.369109] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:15.539 [2024-11-20 13:49:27.369122] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:15.539 [2024-11-20 13:49:27.369132] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:15.539 [2024-11-20 13:49:27.369143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:15.539 [2024-11-20 13:49:27.369166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:15.539 [2024-11-20 13:49:27.369175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:15.539 [2024-11-20 13:49:27.369184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:15.539 [2024-11-20 13:49:27.369194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.539 [2024-11-20 13:49:27.369205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:15.539 [2024-11-20 13:49:27.369217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.300 ms 00:30:15.539 [2024-11-20 13:49:27.369227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.539 [2024-11-20 13:49:27.390011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.539 [2024-11-20 13:49:27.390064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:15.539 [2024-11-20 
13:49:27.390080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.747 ms 00:30:15.539 [2024-11-20 13:49:27.390092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.539 [2024-11-20 13:49:27.390717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.539 [2024-11-20 13:49:27.390733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:15.539 [2024-11-20 13:49:27.390745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:30:15.539 [2024-11-20 13:49:27.390762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.539 [2024-11-20 13:49:27.443522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.539 [2024-11-20 13:49:27.443581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:15.539 [2024-11-20 13:49:27.443606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.539 [2024-11-20 13:49:27.443619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.539 [2024-11-20 13:49:27.443702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.539 [2024-11-20 13:49:27.443714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:15.539 [2024-11-20 13:49:27.443724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.539 [2024-11-20 13:49:27.443741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.539 [2024-11-20 13:49:27.443831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.539 [2024-11-20 13:49:27.443846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:15.539 [2024-11-20 13:49:27.443857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.539 [2024-11-20 13:49:27.443868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.539 [2024-11-20 13:49:27.443886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.539 [2024-11-20 13:49:27.443897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:15.539 [2024-11-20 13:49:27.443908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.539 [2024-11-20 13:49:27.443918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.799 [2024-11-20 13:49:27.569383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.799 [2024-11-20 13:49:27.569453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:15.799 [2024-11-20 13:49:27.569471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.799 [2024-11-20 13:49:27.569482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.799 [2024-11-20 13:49:27.672911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.799 [2024-11-20 13:49:27.672981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:15.799 [2024-11-20 13:49:27.672996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.799 [2024-11-20 13:49:27.673014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.799 [2024-11-20 13:49:27.673120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.799 [2024-11-20 13:49:27.673133] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:30:15.799 [2024-11-20 13:49:27.673144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:15.799 [2024-11-20 13:49:27.673154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:15.799 [2024-11-20 13:49:27.673211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:15.799 [2024-11-20 13:49:27.673224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:30:15.799 [2024-11-20 13:49:27.673235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:15.799 [2024-11-20 13:49:27.673245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:15.799 [2024-11-20 13:49:27.673366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:15.799 [2024-11-20 13:49:27.673380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:30:15.799 [2024-11-20 13:49:27.673391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:15.799 [2024-11-20 13:49:27.673402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:15.799 [2024-11-20 13:49:27.673440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:15.799 [2024-11-20 13:49:27.673452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:30:15.799 [2024-11-20 13:49:27.673463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:15.799 [2024-11-20 13:49:27.673474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:15.799 [2024-11-20 13:49:27.673520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:15.799 [2024-11-20 13:49:27.673531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:30:15.799 [2024-11-20 13:49:27.673542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:15.799 [2024-11-20 13:49:27.673552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:15.799 [2024-11-20 13:49:27.673594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:15.799 [2024-11-20 13:49:27.673625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:30:15.799 [2024-11-20 13:49:27.673636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:15.799 [2024-11-20 13:49:27.673646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:15.799 [2024-11-20 13:49:27.673785] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 528.577 ms, result 0
00:30:17.179 
00:30:17.179 
00:30:17.179 13:49:28 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:30:18.555 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:30:18.555 13:49:30 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:30:18.814 [2024-11-20 13:49:30.600781] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
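The two ftl/restore.sh commands just above are the verify-and-continue seam of this test: after a full 1 GiB pass through the ftl0 bdev (the "Copying: x/1024 [MB]" progress earlier) and a clean FTL shutdown, the digest check prints "testfile: OK", and a fresh spdk_dd write then starts at an offset. A minimal sketch of that pattern follows, under stated assumptions: the paths, bdev name, and flags are copied from the log, while the ordering comments and the 4 KiB block arithmetic are inferred, so this is illustrative rather than the actual restore.sh source.

#!/usr/bin/env bash
# Sketch only: checksum round-trip through an FTL bdev, not the real restore.sh.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
FILE=$SPDK/test/ftl/testfile

# 1. Record the expected digest once, when the test data is generated.
md5sum "$FILE" > "$FILE.md5"

# 2. The test copies the data through ftl0 and tears FTL down (the records
#    above); once the data has been read back out of ftl0, a bit-identical
#    result makes this print "testfile: OK".
md5sum -c "$FILE.md5"

# 3. The next pass starts deeper into the device. The layout dump above shows
#    0x20 blocks == 0.12 MiB, i.e. 4 KiB per block, so assuming --seek counts
#    those units, 131072 of them places this write 512 MiB in.
"$SPDK/build/bin/spdk_dd" --if="$FILE" --ob=ftl0 \
    --json="$SPDK/test/ftl/config/ftl.json" --seek=131072

The point of the md5 round-trip is that it exercises the whole restore path traced above: if any band, L2P, or NV-cache checkpoint metadata had been brought back wrong, the read-back bytes, and hence the digest, would differ.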
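A side note on reading these traces: every FTL management step logs an Action or Rollback marker at mngt/ftl_mngt.c:427, its name at :428, its duration at :430, and its status at :431, which makes per-step timing easy to pull out of a saved console log. A small awk filter, as a hedged sketch -- the console.log filename is a placeholder, and it assumes one record per line as the console emits them, not the re-wrapped lines of this excerpt:

awk '
  /427:trace_step/ { kind = $NF }                            # "Action" or "Rollback"
  /428:trace_step/ { name = $0; sub(/.*name: /, "", name) }  # step name
  /430:trace_step/ { d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, "", d)
                     printf "%-9s %10s ms  %s\n", kind, d, name }
' console.log

Run against the startup block above, it would report, for example, 58.123 ms for "Initialize NV cache" and 88.405 ms for "Restore P2L checkpoints", two of the largest contributors to the 390.803 ms "FTL startup" total.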
00:30:18.814 [2024-11-20 13:49:30.600929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80363 ] 00:30:19.074 [2024-11-20 13:49:30.783594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.074 [2024-11-20 13:49:30.902930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.334 [2024-11-20 13:49:31.267755] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:19.334 [2024-11-20 13:49:31.267828] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:19.595 [2024-11-20 13:49:31.429305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.595 [2024-11-20 13:49:31.429366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:19.595 [2024-11-20 13:49:31.429387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:19.595 [2024-11-20 13:49:31.429398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.595 [2024-11-20 13:49:31.429450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.595 [2024-11-20 13:49:31.429463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:19.595 [2024-11-20 13:49:31.429477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:19.595 [2024-11-20 13:49:31.429487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.595 [2024-11-20 13:49:31.429509] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:19.595 [2024-11-20 13:49:31.430548] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:19.595 [2024-11-20 13:49:31.430578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.595 [2024-11-20 13:49:31.430589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:19.595 [2024-11-20 13:49:31.430616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:30:19.595 [2024-11-20 13:49:31.430627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.595 [2024-11-20 13:49:31.432054] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:19.595 [2024-11-20 13:49:31.450539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.595 [2024-11-20 13:49:31.450582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:19.595 [2024-11-20 13:49:31.450611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.515 ms 00:30:19.595 [2024-11-20 13:49:31.450622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.595 [2024-11-20 13:49:31.450692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.595 [2024-11-20 13:49:31.450705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:19.595 [2024-11-20 13:49:31.450716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:30:19.595 [2024-11-20 13:49:31.450726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.595 [2024-11-20 13:49:31.457452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:19.595 [2024-11-20 13:49:31.457482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:19.595 [2024-11-20 13:49:31.457495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.663 ms 00:30:19.596 [2024-11-20 13:49:31.457509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.596 [2024-11-20 13:49:31.457589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.596 [2024-11-20 13:49:31.457612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:19.596 [2024-11-20 13:49:31.457624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:19.596 [2024-11-20 13:49:31.457634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.596 [2024-11-20 13:49:31.457676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.596 [2024-11-20 13:49:31.457689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:19.596 [2024-11-20 13:49:31.457699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:19.596 [2024-11-20 13:49:31.457710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.596 [2024-11-20 13:49:31.457740] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:19.596 [2024-11-20 13:49:31.462658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.596 [2024-11-20 13:49:31.462689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:19.596 [2024-11-20 13:49:31.462702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.936 ms 00:30:19.596 [2024-11-20 13:49:31.462716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.596 [2024-11-20 13:49:31.462748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.596 [2024-11-20 13:49:31.462759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:19.596 [2024-11-20 13:49:31.462770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:19.596 [2024-11-20 13:49:31.462779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.596 [2024-11-20 13:49:31.462833] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:19.596 [2024-11-20 13:49:31.462858] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:19.596 [2024-11-20 13:49:31.462893] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:19.596 [2024-11-20 13:49:31.462914] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:19.596 [2024-11-20 13:49:31.463006] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:19.596 [2024-11-20 13:49:31.463020] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:19.596 [2024-11-20 13:49:31.463033] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:19.596 [2024-11-20 13:49:31.463046] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:19.596 [2024-11-20 13:49:31.463057] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:19.596 [2024-11-20 13:49:31.463069] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:19.596 [2024-11-20 13:49:31.463079] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:19.596 [2024-11-20 13:49:31.463089] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:19.596 [2024-11-20 13:49:31.463102] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:19.596 [2024-11-20 13:49:31.463113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.596 [2024-11-20 13:49:31.463123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:19.596 [2024-11-20 13:49:31.463133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:30:19.596 [2024-11-20 13:49:31.463143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.596 [2024-11-20 13:49:31.463213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.596 [2024-11-20 13:49:31.463224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:19.596 [2024-11-20 13:49:31.463234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:19.596 [2024-11-20 13:49:31.463245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.596 [2024-11-20 13:49:31.463342] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:19.596 [2024-11-20 13:49:31.463357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:19.596 [2024-11-20 13:49:31.463368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:19.596 [2024-11-20 13:49:31.463377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:19.596 [2024-11-20 13:49:31.463388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:19.596 [2024-11-20 13:49:31.463397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:19.596 [2024-11-20 13:49:31.463406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:19.596 [2024-11-20 13:49:31.463415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:19.596 [2024-11-20 13:49:31.463424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:19.596 [2024-11-20 13:49:31.463433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:19.596 [2024-11-20 13:49:31.463444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:19.596 [2024-11-20 13:49:31.463453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:19.596 [2024-11-20 13:49:31.463462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:19.596 [2024-11-20 13:49:31.463472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:19.596 [2024-11-20 13:49:31.463482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:19.596 [2024-11-20 13:49:31.463502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:19.596 [2024-11-20 13:49:31.463511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:19.596 [2024-11-20 13:49:31.463520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:19.596 [2024-11-20 13:49:31.463529] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:19.596 [2024-11-20 13:49:31.463538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:19.596 [2024-11-20 13:49:31.463548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:19.596 [2024-11-20 13:49:31.463557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:19.596 [2024-11-20 13:49:31.463566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:19.596 [2024-11-20 13:49:31.463575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:19.596 [2024-11-20 13:49:31.463584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:19.596 [2024-11-20 13:49:31.463593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:19.596 [2024-11-20 13:49:31.463625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:19.596 [2024-11-20 13:49:31.463635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:19.596 [2024-11-20 13:49:31.463644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:19.596 [2024-11-20 13:49:31.463653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:19.596 [2024-11-20 13:49:31.463662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:19.596 [2024-11-20 13:49:31.463671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:19.596 [2024-11-20 13:49:31.463680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:19.596 [2024-11-20 13:49:31.463690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:19.596 [2024-11-20 13:49:31.463699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:19.596 [2024-11-20 13:49:31.463708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:19.596 [2024-11-20 13:49:31.463717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:19.596 [2024-11-20 13:49:31.463725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:19.596 [2024-11-20 13:49:31.463735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:19.596 [2024-11-20 13:49:31.463743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:19.596 [2024-11-20 13:49:31.463752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:19.596 [2024-11-20 13:49:31.463761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:19.596 [2024-11-20 13:49:31.463771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:19.596 [2024-11-20 13:49:31.463780] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:19.596 [2024-11-20 13:49:31.463790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:19.596 [2024-11-20 13:49:31.463800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:19.596 [2024-11-20 13:49:31.463810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:19.596 [2024-11-20 13:49:31.463819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:19.596 [2024-11-20 13:49:31.463829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:19.596 [2024-11-20 13:49:31.463838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:19.596 
[2024-11-20 13:49:31.463846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:19.596 [2024-11-20 13:49:31.463855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:19.596 [2024-11-20 13:49:31.463864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:19.596 [2024-11-20 13:49:31.463875] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:19.596 [2024-11-20 13:49:31.463887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:19.596 [2024-11-20 13:49:31.463899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:19.596 [2024-11-20 13:49:31.463909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:19.596 [2024-11-20 13:49:31.463920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:19.596 [2024-11-20 13:49:31.463930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:19.596 [2024-11-20 13:49:31.463940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:19.596 [2024-11-20 13:49:31.463951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:19.596 [2024-11-20 13:49:31.463961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:19.597 [2024-11-20 13:49:31.463972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:19.597 [2024-11-20 13:49:31.463982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:19.597 [2024-11-20 13:49:31.463992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:19.597 [2024-11-20 13:49:31.464002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:19.597 [2024-11-20 13:49:31.464012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:19.597 [2024-11-20 13:49:31.464022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:19.597 [2024-11-20 13:49:31.464032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:19.597 [2024-11-20 13:49:31.464043] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:19.597 [2024-11-20 13:49:31.464058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:19.597 [2024-11-20 13:49:31.464069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:19.597 [2024-11-20 13:49:31.464080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:19.597 [2024-11-20 13:49:31.464090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:19.597 [2024-11-20 13:49:31.464101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:19.597 [2024-11-20 13:49:31.464111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.597 [2024-11-20 13:49:31.464122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:19.597 [2024-11-20 13:49:31.464132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:30:19.597 [2024-11-20 13:49:31.464142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.597 [2024-11-20 13:49:31.502551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.597 [2024-11-20 13:49:31.502611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:19.597 [2024-11-20 13:49:31.502629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.420 ms 00:30:19.597 [2024-11-20 13:49:31.502640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.597 [2024-11-20 13:49:31.502750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.597 [2024-11-20 13:49:31.502762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:19.597 [2024-11-20 13:49:31.502772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:30:19.597 [2024-11-20 13:49:31.502782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.857 [2024-11-20 13:49:31.563509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.857 [2024-11-20 13:49:31.563559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:19.857 [2024-11-20 13:49:31.563575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.741 ms 00:30:19.857 [2024-11-20 13:49:31.563585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.857 [2024-11-20 13:49:31.563664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.857 [2024-11-20 13:49:31.563676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:19.857 [2024-11-20 13:49:31.563693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:19.857 [2024-11-20 13:49:31.563703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.857 [2024-11-20 13:49:31.564198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.857 [2024-11-20 13:49:31.564220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:19.857 [2024-11-20 13:49:31.564232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:30:19.857 [2024-11-20 13:49:31.564242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.857 [2024-11-20 13:49:31.564368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.857 [2024-11-20 13:49:31.564383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:19.857 [2024-11-20 13:49:31.564393] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:30:19.857 [2024-11-20 13:49:31.564409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.857 [2024-11-20 13:49:31.584963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.857 [2024-11-20 13:49:31.585011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:19.857 [2024-11-20 13:49:31.585030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.564 ms 00:30:19.857 [2024-11-20 13:49:31.585041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.857 [2024-11-20 13:49:31.604714] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:19.857 [2024-11-20 13:49:31.604756] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:19.857 [2024-11-20 13:49:31.604772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.857 [2024-11-20 13:49:31.604784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:19.857 [2024-11-20 13:49:31.604797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.623 ms 00:30:19.857 [2024-11-20 13:49:31.604807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.857 [2024-11-20 13:49:31.633977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.857 [2024-11-20 13:49:31.634028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:19.857 [2024-11-20 13:49:31.634047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.165 ms 00:30:19.857 [2024-11-20 13:49:31.634062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.857 [2024-11-20 13:49:31.653146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.857 [2024-11-20 13:49:31.653189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:19.857 [2024-11-20 13:49:31.653204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.038 ms 00:30:19.857 [2024-11-20 13:49:31.653215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.857 [2024-11-20 13:49:31.671204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.857 [2024-11-20 13:49:31.671246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:19.857 [2024-11-20 13:49:31.671262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.975 ms 00:30:19.857 [2024-11-20 13:49:31.671272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.857 [2024-11-20 13:49:31.672028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.857 [2024-11-20 13:49:31.672059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:19.857 [2024-11-20 13:49:31.672072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:30:19.857 [2024-11-20 13:49:31.672087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.857 [2024-11-20 13:49:31.756667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.857 [2024-11-20 13:49:31.756747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:19.857 [2024-11-20 13:49:31.756773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.690 ms 00:30:19.857 [2024-11-20 13:49:31.756785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.857 [2024-11-20 13:49:31.768777] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:19.857 [2024-11-20 13:49:31.772056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.857 [2024-11-20 13:49:31.772103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:19.857 [2024-11-20 13:49:31.772119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.215 ms 00:30:19.857 [2024-11-20 13:49:31.772145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.857 [2024-11-20 13:49:31.772263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.857 [2024-11-20 13:49:31.772278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:19.857 [2024-11-20 13:49:31.772301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:19.858 [2024-11-20 13:49:31.772316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.858 [2024-11-20 13:49:31.772429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.858 [2024-11-20 13:49:31.772442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:19.858 [2024-11-20 13:49:31.772454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:19.858 [2024-11-20 13:49:31.772465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.858 [2024-11-20 13:49:31.772493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.858 [2024-11-20 13:49:31.772516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:19.858 [2024-11-20 13:49:31.772527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:19.858 [2024-11-20 13:49:31.772536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.858 [2024-11-20 13:49:31.772574] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:19.858 [2024-11-20 13:49:31.772586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.858 [2024-11-20 13:49:31.772597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:19.858 [2024-11-20 13:49:31.772607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:19.858 [2024-11-20 13:49:31.772617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.858 [2024-11-20 13:49:31.809342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.858 [2024-11-20 13:49:31.809409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:19.858 [2024-11-20 13:49:31.809435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.759 ms 00:30:19.858 [2024-11-20 13:49:31.809461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:19.858 [2024-11-20 13:49:31.809590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:19.858 [2024-11-20 13:49:31.809611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:19.858 [2024-11-20 13:49:31.809656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:19.858 [2024-11-20 13:49:31.809673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:19.858 [2024-11-20 13:50:09.811363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 381.955 ms, result 0 00:30:21.236 [2024-11-20T13:50:09.771Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-20 13:50:09.769213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.814 [2024-11-20 13:50:09.769283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:57.814 [2024-11-20 13:50:09.769302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:57.814 [2024-11-20 13:50:09.769326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.073 [2024-11-20 13:50:09.771282] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:58.073 [2024-11-20 13:50:09.777106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.073 [2024-11-20 13:50:09.777148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:58.073 [2024-11-20 13:50:09.777163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.754 ms 00:30:58.073 [2024-11-20 13:50:09.777176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.073 [2024-11-20
13:50:09.788142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.073 [2024-11-20 13:50:09.788188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:58.073 [2024-11-20 13:50:09.788203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.797 ms 00:30:58.073 [2024-11-20 13:50:09.788223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.073 [2024-11-20 13:50:09.811874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.073 [2024-11-20 13:50:09.811952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:58.073 [2024-11-20 13:50:09.811969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.667 ms 00:30:58.073 [2024-11-20 13:50:09.811981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.073 [2024-11-20 13:50:09.817111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.073 [2024-11-20 13:50:09.817149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:58.073 [2024-11-20 13:50:09.817163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.102 ms 00:30:58.073 [2024-11-20 13:50:09.817174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.073 [2024-11-20 13:50:09.856127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.073 [2024-11-20 13:50:09.856189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:58.073 [2024-11-20 13:50:09.856205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.959 ms 00:30:58.073 [2024-11-20 13:50:09.856215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.074 [2024-11-20 13:50:09.877999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.074 [2024-11-20 13:50:09.878078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:58.074 [2024-11-20 13:50:09.878095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.764 ms 00:30:58.074 [2024-11-20 13:50:09.878107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.074 [2024-11-20 13:50:09.986827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.074 [2024-11-20 13:50:09.986919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:58.074 [2024-11-20 13:50:09.986937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.830 ms 00:30:58.074 [2024-11-20 13:50:09.986965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.074 [2024-11-20 13:50:10.024676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.074 [2024-11-20 13:50:10.024733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:58.074 [2024-11-20 13:50:10.024749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.751 ms 00:30:58.074 [2024-11-20 13:50:10.024761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.335 [2024-11-20 13:50:10.062810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.335 [2024-11-20 13:50:10.062891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:58.335 [2024-11-20 13:50:10.062908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.064 ms 00:30:58.335 [2024-11-20 13:50:10.062919] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.335 [2024-11-20 13:50:10.098383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.335 [2024-11-20 13:50:10.098436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:58.335 [2024-11-20 13:50:10.098452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.469 ms 00:30:58.335 [2024-11-20 13:50:10.098463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.335 [2024-11-20 13:50:10.133912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.335 [2024-11-20 13:50:10.133982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:58.335 [2024-11-20 13:50:10.134000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.414 ms 00:30:58.335 [2024-11-20 13:50:10.134010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.335 [2024-11-20 13:50:10.134052] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:58.335 [2024-11-20 13:50:10.134071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 111360 / 261120 wr_cnt: 1 state: open 00:30:58.335 [2024-11-20 13:50:10.134084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134268] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 
13:50:10.134526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:58.335 [2024-11-20 13:50:10.134710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:30:58.336 [2024-11-20 13:50:10.134810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.134998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.135008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.135019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.135035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.135045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.135056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.135066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.135076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.135088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.135099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.135109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.135120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.135130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.135140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.135151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:58.336 [2024-11-20 13:50:10.135170] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:58.336 [2024-11-20 13:50:10.135181] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e41e1e69-a5d8-48be-8784-65345996624c 00:30:58.336 [2024-11-20 13:50:10.135192] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 111360 00:30:58.336 [2024-11-20 13:50:10.135202] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 112320 00:30:58.336 [2024-11-20 13:50:10.135211] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 111360 00:30:58.336 [2024-11-20 13:50:10.135222] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0086 00:30:58.336 [2024-11-20 13:50:10.135232] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:58.336 [2024-11-20 13:50:10.135249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:58.336 [2024-11-20 13:50:10.135271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:58.336 [2024-11-20 13:50:10.135280] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:58.336 [2024-11-20 13:50:10.135289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:58.336 [2024-11-20 13:50:10.135299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.336 [2024-11-20 13:50:10.135309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:58.336 [2024-11-20 13:50:10.135320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:30:58.336 [2024-11-20 13:50:10.135330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.336 [2024-11-20 13:50:10.154772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.336 [2024-11-20 13:50:10.154819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:58.336 [2024-11-20 13:50:10.154834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.433 ms 00:30:58.336 [2024-11-20 13:50:10.154850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.336 [2024-11-20 13:50:10.155348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.336 [2024-11-20 13:50:10.155367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:30:58.336 [2024-11-20 13:50:10.155379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:30:58.336 [2024-11-20 13:50:10.155390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.336 [2024-11-20 13:50:10.207050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.336 [2024-11-20 13:50:10.207119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:58.336 [2024-11-20 13:50:10.207136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.336 [2024-11-20 13:50:10.207147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.336 [2024-11-20 13:50:10.207231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.336 [2024-11-20 13:50:10.207243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:58.336 [2024-11-20 13:50:10.207254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.336 [2024-11-20 13:50:10.207265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.336 [2024-11-20 13:50:10.207350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.336 [2024-11-20 13:50:10.207364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:58.336 [2024-11-20 13:50:10.207379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.336 [2024-11-20 13:50:10.207389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.336 [2024-11-20 13:50:10.207408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.336 [2024-11-20 13:50:10.207419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:58.336 [2024-11-20 13:50:10.207429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.336 [2024-11-20 13:50:10.207438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.596 [2024-11-20 13:50:10.334860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.596 [2024-11-20 13:50:10.334924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:58.596 [2024-11-20 13:50:10.334956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.596 [2024-11-20 13:50:10.334967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.596 [2024-11-20 13:50:10.437732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.596 [2024-11-20 13:50:10.437798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:58.597 [2024-11-20 13:50:10.437813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.597 [2024-11-20 13:50:10.437824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.597 [2024-11-20 13:50:10.437925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.597 [2024-11-20 13:50:10.437937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:58.597 [2024-11-20 13:50:10.437949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.597 [2024-11-20 13:50:10.437966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.597 [2024-11-20 13:50:10.438011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.597 
[2024-11-20 13:50:10.438023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:58.597 [2024-11-20 13:50:10.438033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.597 [2024-11-20 13:50:10.438042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.597 [2024-11-20 13:50:10.438150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.597 [2024-11-20 13:50:10.438171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:58.597 [2024-11-20 13:50:10.438199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.597 [2024-11-20 13:50:10.438211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.597 [2024-11-20 13:50:10.438265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.597 [2024-11-20 13:50:10.438278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:58.597 [2024-11-20 13:50:10.438289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.597 [2024-11-20 13:50:10.438298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.597 [2024-11-20 13:50:10.438339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.597 [2024-11-20 13:50:10.438350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:58.597 [2024-11-20 13:50:10.438361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.597 [2024-11-20 13:50:10.438370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.597 [2024-11-20 13:50:10.438417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:58.597 [2024-11-20 13:50:10.438429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:58.597 [2024-11-20 13:50:10.438439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:58.597 [2024-11-20 13:50:10.438449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.597 [2024-11-20 13:50:10.438574] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 672.857 ms, result 0 00:30:59.977 00:30:59.977 00:30:59.977 13:50:11 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:00.236 [2024-11-20 13:50:12.021816] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:31:00.236 [2024-11-20 13:50:12.021974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80776 ] 00:31:00.495 [2024-11-20 13:50:12.203695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.495 [2024-11-20 13:50:12.322896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.754 [2024-11-20 13:50:12.693051] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:00.754 [2024-11-20 13:50:12.693128] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:01.015 [2024-11-20 13:50:12.856436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.015 [2024-11-20 13:50:12.856518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:01.015 [2024-11-20 13:50:12.856568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:01.015 [2024-11-20 13:50:12.856589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.015 [2024-11-20 13:50:12.856706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.015 [2024-11-20 13:50:12.856731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:01.015 [2024-11-20 13:50:12.856752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:01.015 [2024-11-20 13:50:12.856766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.015 [2024-11-20 13:50:12.856818] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:01.015 [2024-11-20 13:50:12.858058] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:01.015 [2024-11-20 13:50:12.858098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.015 [2024-11-20 13:50:12.858114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:01.015 [2024-11-20 13:50:12.858129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.293 ms 00:31:01.015 [2024-11-20 13:50:12.858160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.015 [2024-11-20 13:50:12.859793] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:01.015 [2024-11-20 13:50:12.880906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.015 [2024-11-20 13:50:12.880960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:01.015 [2024-11-20 13:50:12.880978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.146 ms 00:31:01.015 [2024-11-20 13:50:12.880989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.015 [2024-11-20 13:50:12.881074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.015 [2024-11-20 13:50:12.881087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:01.015 [2024-11-20 13:50:12.881099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:31:01.015 [2024-11-20 13:50:12.881109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.015 [2024-11-20 13:50:12.888718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:01.015 [2024-11-20 13:50:12.888765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:01.015 [2024-11-20 13:50:12.888781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.533 ms 00:31:01.015 [2024-11-20 13:50:12.888801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.015 [2024-11-20 13:50:12.888894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.015 [2024-11-20 13:50:12.888909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:01.015 [2024-11-20 13:50:12.888920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:31:01.015 [2024-11-20 13:50:12.888931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.015 [2024-11-20 13:50:12.888982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.015 [2024-11-20 13:50:12.888994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:01.015 [2024-11-20 13:50:12.889004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:01.015 [2024-11-20 13:50:12.889015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.015 [2024-11-20 13:50:12.889048] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:01.015 [2024-11-20 13:50:12.893844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.015 [2024-11-20 13:50:12.893879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:01.015 [2024-11-20 13:50:12.893892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.816 ms 00:31:01.015 [2024-11-20 13:50:12.893907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.015 [2024-11-20 13:50:12.893940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.015 [2024-11-20 13:50:12.893952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:01.015 [2024-11-20 13:50:12.893963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:01.015 [2024-11-20 13:50:12.893973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.015 [2024-11-20 13:50:12.894035] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:01.015 [2024-11-20 13:50:12.894060] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:01.015 [2024-11-20 13:50:12.894098] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:01.015 [2024-11-20 13:50:12.894119] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:01.015 [2024-11-20 13:50:12.894220] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:01.015 [2024-11-20 13:50:12.894234] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:01.015 [2024-11-20 13:50:12.894247] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:01.015 [2024-11-20 13:50:12.894261] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:01.015 [2024-11-20 13:50:12.894274] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:01.015 [2024-11-20 13:50:12.894285] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:01.015 [2024-11-20 13:50:12.894295] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:01.015 [2024-11-20 13:50:12.894305] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:01.015 [2024-11-20 13:50:12.894319] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:01.015 [2024-11-20 13:50:12.894329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.015 [2024-11-20 13:50:12.894340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:01.015 [2024-11-20 13:50:12.894350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:31:01.015 [2024-11-20 13:50:12.894360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.015 [2024-11-20 13:50:12.894444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.015 [2024-11-20 13:50:12.894459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:01.015 [2024-11-20 13:50:12.894471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:01.015 [2024-11-20 13:50:12.894480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.015 [2024-11-20 13:50:12.894582] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:01.016 [2024-11-20 13:50:12.894726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:01.016 [2024-11-20 13:50:12.894746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:01.016 [2024-11-20 13:50:12.894756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.016 [2024-11-20 13:50:12.894768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:01.016 [2024-11-20 13:50:12.894777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:01.016 [2024-11-20 13:50:12.894787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:01.016 [2024-11-20 13:50:12.894796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:01.016 [2024-11-20 13:50:12.894806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:01.016 [2024-11-20 13:50:12.894815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:01.016 [2024-11-20 13:50:12.894825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:01.016 [2024-11-20 13:50:12.894835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:01.016 [2024-11-20 13:50:12.894845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:01.016 [2024-11-20 13:50:12.894854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:01.016 [2024-11-20 13:50:12.894863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:01.016 [2024-11-20 13:50:12.894885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.016 [2024-11-20 13:50:12.894895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:01.016 [2024-11-20 13:50:12.894904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:01.016 [2024-11-20 13:50:12.894914] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.016 [2024-11-20 13:50:12.894923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:01.016 [2024-11-20 13:50:12.894932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:01.016 [2024-11-20 13:50:12.894941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.016 [2024-11-20 13:50:12.894951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:01.016 [2024-11-20 13:50:12.894960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:01.016 [2024-11-20 13:50:12.894969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.016 [2024-11-20 13:50:12.894978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:01.016 [2024-11-20 13:50:12.894988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:01.016 [2024-11-20 13:50:12.894997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.016 [2024-11-20 13:50:12.895006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:01.016 [2024-11-20 13:50:12.895015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:01.016 [2024-11-20 13:50:12.895024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.016 [2024-11-20 13:50:12.895034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:01.016 [2024-11-20 13:50:12.895043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:01.016 [2024-11-20 13:50:12.895051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:01.016 [2024-11-20 13:50:12.895060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:01.016 [2024-11-20 13:50:12.895069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:01.016 [2024-11-20 13:50:12.895078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:01.016 [2024-11-20 13:50:12.895088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:01.016 [2024-11-20 13:50:12.895097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:01.016 [2024-11-20 13:50:12.895112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.016 [2024-11-20 13:50:12.895121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:01.016 [2024-11-20 13:50:12.895130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:01.016 [2024-11-20 13:50:12.895139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.016 [2024-11-20 13:50:12.895148] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:01.016 [2024-11-20 13:50:12.895159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:01.016 [2024-11-20 13:50:12.895168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:01.016 [2024-11-20 13:50:12.895178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.016 [2024-11-20 13:50:12.895188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:01.016 [2024-11-20 13:50:12.895198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:01.016 [2024-11-20 13:50:12.895206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:01.016 
[2024-11-20 13:50:12.895215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:01.016 [2024-11-20 13:50:12.895224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:01.016 [2024-11-20 13:50:12.895233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:01.016 [2024-11-20 13:50:12.895244] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:01.016 [2024-11-20 13:50:12.895257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:01.016 [2024-11-20 13:50:12.895269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:01.016 [2024-11-20 13:50:12.895280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:01.016 [2024-11-20 13:50:12.895290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:01.016 [2024-11-20 13:50:12.895300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:01.016 [2024-11-20 13:50:12.895311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:01.016 [2024-11-20 13:50:12.895321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:01.016 [2024-11-20 13:50:12.895331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:01.016 [2024-11-20 13:50:12.895341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:01.016 [2024-11-20 13:50:12.895351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:01.016 [2024-11-20 13:50:12.895361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:01.016 [2024-11-20 13:50:12.895371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:01.016 [2024-11-20 13:50:12.895381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:01.016 [2024-11-20 13:50:12.895391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:01.016 [2024-11-20 13:50:12.895403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:01.016 [2024-11-20 13:50:12.895413] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:01.016 [2024-11-20 13:50:12.895428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:01.016 [2024-11-20 13:50:12.895439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:01.016 [2024-11-20 13:50:12.895449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:01.016 [2024-11-20 13:50:12.895465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:01.016 [2024-11-20 13:50:12.895486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:01.016 [2024-11-20 13:50:12.895504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.016 [2024-11-20 13:50:12.895522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:01.016 [2024-11-20 13:50:12.895534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:31:01.016 [2024-11-20 13:50:12.895544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.016 [2024-11-20 13:50:12.937533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.016 [2024-11-20 13:50:12.937591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:01.016 [2024-11-20 13:50:12.938438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.984 ms 00:31:01.016 [2024-11-20 13:50:12.938455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.016 [2024-11-20 13:50:12.938588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.016 [2024-11-20 13:50:12.938634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:01.016 [2024-11-20 13:50:12.938656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:31:01.016 [2024-11-20 13:50:12.938675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:12.997301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:12.997367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:01.277 [2024-11-20 13:50:12.997385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.600 ms 00:31:01.277 [2024-11-20 13:50:12.997396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:12.997468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:12.997480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:01.277 [2024-11-20 13:50:12.997496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:01.277 [2024-11-20 13:50:12.997507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:12.998039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:12.998056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:01.277 [2024-11-20 13:50:12.998068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:31:01.277 [2024-11-20 13:50:12.998078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:12.998220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:12.998235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:01.277 [2024-11-20 13:50:12.998246] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:31:01.277 [2024-11-20 13:50:12.998263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:13.017552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:13.017616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:01.277 [2024-11-20 13:50:13.017639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.294 ms 00:31:01.277 [2024-11-20 13:50:13.017650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:13.038283] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:01.277 [2024-11-20 13:50:13.038359] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:01.277 [2024-11-20 13:50:13.038380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:13.038393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:01.277 [2024-11-20 13:50:13.038410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.611 ms 00:31:01.277 [2024-11-20 13:50:13.038421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:13.069582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:13.069641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:01.277 [2024-11-20 13:50:13.069659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.134 ms 00:31:01.277 [2024-11-20 13:50:13.069671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:13.088594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:13.088660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:01.277 [2024-11-20 13:50:13.088676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.900 ms 00:31:01.277 [2024-11-20 13:50:13.088687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:13.107226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:13.107270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:01.277 [2024-11-20 13:50:13.107286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.523 ms 00:31:01.277 [2024-11-20 13:50:13.107298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:13.108169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:13.108203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:01.277 [2024-11-20 13:50:13.108215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:31:01.277 [2024-11-20 13:50:13.108233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:13.197174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:13.197251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:01.277 [2024-11-20 13:50:13.197277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 89.054 ms 00:31:01.277 [2024-11-20 13:50:13.197288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:13.208679] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:01.277 [2024-11-20 13:50:13.211888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:13.211920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:01.277 [2024-11-20 13:50:13.211936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.555 ms 00:31:01.277 [2024-11-20 13:50:13.211947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:13.212059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:13.212072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:01.277 [2024-11-20 13:50:13.212085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:01.277 [2024-11-20 13:50:13.212099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:13.213638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:13.213679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:01.277 [2024-11-20 13:50:13.213691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.478 ms 00:31:01.277 [2024-11-20 13:50:13.213702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:13.213747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:13.213759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:01.277 [2024-11-20 13:50:13.213771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:01.277 [2024-11-20 13:50:13.213781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.277 [2024-11-20 13:50:13.213824] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:01.277 [2024-11-20 13:50:13.213836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.277 [2024-11-20 13:50:13.213846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:01.277 [2024-11-20 13:50:13.213857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:01.277 [2024-11-20 13:50:13.213867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.537 [2024-11-20 13:50:13.251286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.537 [2024-11-20 13:50:13.251347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:01.537 [2024-11-20 13:50:13.251362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.459 ms 00:31:01.537 [2024-11-20 13:50:13.251379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.537 [2024-11-20 13:50:13.251459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.537 [2024-11-20 13:50:13.251472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:01.537 [2024-11-20 13:50:13.251484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:31:01.537 [2024-11-20 13:50:13.251494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:01.537 [2024-11-20 13:50:13.252620] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.357 ms, result 0 00:31:02.914  [2024-11-20T13:50:50.173Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-20 13:50:49.967151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.216 [2024-11-20 13:50:49.967241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:38.216 [2024-11-20 13:50:49.967268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:38.216 [2024-11-20 13:50:49.967294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.216 [2024-11-20 13:50:49.967322] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:38.216 [2024-11-20 13:50:49.972556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.216 [2024-11-20 13:50:49.972623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:38.216 [2024-11-20 13:50:49.972641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.219 ms 00:31:38.216 [2024-11-20 13:50:49.972654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.216 [2024-11-20 13:50:49.973014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:31:38.216 [2024-11-20 13:50:49.973039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:38.216 [2024-11-20 13:50:49.973052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:31:38.216 [2024-11-20 13:50:49.973064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.216 [2024-11-20 13:50:49.977956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.216 [2024-11-20 13:50:49.978005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:38.216 [2024-11-20 13:50:49.978019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.874 ms 00:31:38.216 [2024-11-20 13:50:49.978031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.216 [2024-11-20 13:50:49.984553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.216 [2024-11-20 13:50:49.984625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:38.216 [2024-11-20 13:50:49.984640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.489 ms 00:31:38.216 [2024-11-20 13:50:49.984652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.216 [2024-11-20 13:50:50.023645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.216 [2024-11-20 13:50:50.023699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:38.216 [2024-11-20 13:50:50.023717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.831 ms 00:31:38.216 [2024-11-20 13:50:50.023727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.216 [2024-11-20 13:50:50.045031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.216 [2024-11-20 13:50:50.045082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:38.216 [2024-11-20 13:50:50.045099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.287 ms 00:31:38.216 [2024-11-20 13:50:50.045110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.476 [2024-11-20 13:50:50.184812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.476 [2024-11-20 13:50:50.184920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:38.476 [2024-11-20 13:50:50.184942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 139.870 ms 00:31:38.476 [2024-11-20 13:50:50.184955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.476 [2024-11-20 13:50:50.222643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.476 [2024-11-20 13:50:50.222703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:38.476 [2024-11-20 13:50:50.222718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.726 ms 00:31:38.476 [2024-11-20 13:50:50.222728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.476 [2024-11-20 13:50:50.260223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.476 [2024-11-20 13:50:50.260293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:38.476 [2024-11-20 13:50:50.260326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.508 ms 00:31:38.476 [2024-11-20 13:50:50.260337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.476 [2024-11-20 
13:50:50.296814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.476 [2024-11-20 13:50:50.296869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:38.476 [2024-11-20 13:50:50.296887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.472 ms 00:31:38.476 [2024-11-20 13:50:50.296898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.476 [2024-11-20 13:50:50.332660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.476 [2024-11-20 13:50:50.332705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:38.476 [2024-11-20 13:50:50.332720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.722 ms 00:31:38.476 [2024-11-20 13:50:50.332730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.476 [2024-11-20 13:50:50.332775] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:38.476 [2024-11-20 13:50:50.332793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:31:38.476 [2024-11-20 13:50:50.332806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.332981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 
state: free 00:31:38.476 [2024-11-20 13:50:50.332992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 
0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:38.476 [2024-11-20 13:50:50.333303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333793] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:38.477 [2024-11-20 13:50:50.333884] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:38.477 [2024-11-20 13:50:50.333895] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e41e1e69-a5d8-48be-8784-65345996624c 00:31:38.477 [2024-11-20 13:50:50.333906] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:31:38.477 [2024-11-20 13:50:50.333918] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 20672 00:31:38.477 [2024-11-20 13:50:50.333929] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 19712 00:31:38.477 [2024-11-20 13:50:50.333940] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0487 00:31:38.477 [2024-11-20 13:50:50.333950] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:38.477 [2024-11-20 13:50:50.333967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:38.477 [2024-11-20 13:50:50.333977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:38.477 [2024-11-20 13:50:50.333999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:38.477 [2024-11-20 13:50:50.334008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:38.477 [2024-11-20 13:50:50.334018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.477 [2024-11-20 13:50:50.334029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:38.477 [2024-11-20 13:50:50.334040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:31:38.477 [2024-11-20 13:50:50.334050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.477 [2024-11-20 13:50:50.353926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.477 [2024-11-20 13:50:50.353964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:38.477 [2024-11-20 13:50:50.353978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.868 ms 00:31:38.477 [2024-11-20 13:50:50.353995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.477 [2024-11-20 13:50:50.354531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.477 [2024-11-20 13:50:50.354557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:38.477 [2024-11-20 13:50:50.354569] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:31:38.477 [2024-11-20 13:50:50.354580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.477 [2024-11-20 13:50:50.406413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.477 [2024-11-20 13:50:50.406492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:38.477 [2024-11-20 13:50:50.406509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.477 [2024-11-20 13:50:50.406521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.477 [2024-11-20 13:50:50.406608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.477 [2024-11-20 13:50:50.406620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:38.477 [2024-11-20 13:50:50.406631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.477 [2024-11-20 13:50:50.406641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.477 [2024-11-20 13:50:50.406748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.477 [2024-11-20 13:50:50.406762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:38.477 [2024-11-20 13:50:50.406778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.477 [2024-11-20 13:50:50.406789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.477 [2024-11-20 13:50:50.406807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.477 [2024-11-20 13:50:50.406819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:38.477 [2024-11-20 13:50:50.406829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.477 [2024-11-20 13:50:50.406839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.737 [2024-11-20 13:50:50.532938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.737 [2024-11-20 13:50:50.533003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:38.737 [2024-11-20 13:50:50.533029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.737 [2024-11-20 13:50:50.533040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.737 [2024-11-20 13:50:50.636672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.737 [2024-11-20 13:50:50.636735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:38.737 [2024-11-20 13:50:50.636752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.737 [2024-11-20 13:50:50.636763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.737 [2024-11-20 13:50:50.636865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.737 [2024-11-20 13:50:50.636878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:38.737 [2024-11-20 13:50:50.636889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.737 [2024-11-20 13:50:50.636905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.737 [2024-11-20 13:50:50.636959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.737 [2024-11-20 13:50:50.636971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands 00:31:38.737 [2024-11-20 13:50:50.636981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.737 [2024-11-20 13:50:50.636992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.737 [2024-11-20 13:50:50.637120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.737 [2024-11-20 13:50:50.637134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:38.737 [2024-11-20 13:50:50.637144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.737 [2024-11-20 13:50:50.637167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.737 [2024-11-20 13:50:50.637211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.737 [2024-11-20 13:50:50.637224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:38.737 [2024-11-20 13:50:50.637235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.737 [2024-11-20 13:50:50.637245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.737 [2024-11-20 13:50:50.637286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.737 [2024-11-20 13:50:50.637298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:38.737 [2024-11-20 13:50:50.637308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.737 [2024-11-20 13:50:50.637318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.737 [2024-11-20 13:50:50.637364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.737 [2024-11-20 13:50:50.637376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:38.737 [2024-11-20 13:50:50.637387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.737 [2024-11-20 13:50:50.637397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.737 [2024-11-20 13:50:50.637526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 671.426 ms, result 0 00:31:40.114 00:31:40.114 00:31:40.114 13:50:51 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:42.015 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:42.015 13:50:53 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:42.015 13:50:53 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:31:42.015 13:50:53 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:42.015 13:50:53 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:42.015 13:50:53 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:42.015 13:50:53 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79351 00:31:42.015 13:50:53 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79351 ']' 00:31:42.015 13:50:53 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79351 00:31:42.015 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79351) - No such process 00:31:42.015 Process with pid 79351 is not found 00:31:42.015 13:50:53 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79351 is not found' 00:31:42.015 
Remove shared memory files 00:31:42.015 13:50:53 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:31:42.015 13:50:53 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:42.015 13:50:53 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:31:42.015 13:50:53 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:31:42.015 13:50:53 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:31:42.015 13:50:53 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:42.015 13:50:53 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:31:42.015 00:31:42.015 real 3m1.375s 00:31:42.015 user 2m48.472s 00:31:42.015 sys 0m15.010s 00:31:42.015 13:50:53 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:42.015 13:50:53 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:31:42.015 ************************************ 00:31:42.015 END TEST ftl_restore 00:31:42.015 ************************************ 00:31:42.015 13:50:53 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:31:42.015 13:50:53 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:42.015 13:50:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:42.015 13:50:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:42.015 ************************************ 00:31:42.015 START TEST ftl_dirty_shutdown 00:31:42.015 ************************************ 00:31:42.015 13:50:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:31:42.276 * Looking for test storage... 00:31:42.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:42.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.276 --rc genhtml_branch_coverage=1 00:31:42.276 --rc genhtml_function_coverage=1 00:31:42.276 --rc genhtml_legend=1 00:31:42.276 --rc geninfo_all_blocks=1 00:31:42.276 --rc geninfo_unexecuted_blocks=1 00:31:42.276 00:31:42.276 ' 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:42.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.276 --rc genhtml_branch_coverage=1 00:31:42.276 --rc genhtml_function_coverage=1 00:31:42.276 --rc genhtml_legend=1 00:31:42.276 --rc geninfo_all_blocks=1 00:31:42.276 --rc geninfo_unexecuted_blocks=1 00:31:42.276 00:31:42.276 ' 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:42.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.276 --rc genhtml_branch_coverage=1 00:31:42.276 --rc genhtml_function_coverage=1 00:31:42.276 --rc genhtml_legend=1 00:31:42.276 --rc geninfo_all_blocks=1 00:31:42.276 --rc geninfo_unexecuted_blocks=1 00:31:42.276 00:31:42.276 ' 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:42.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.276 --rc genhtml_branch_coverage=1 00:31:42.276 --rc genhtml_function_coverage=1 00:31:42.276 --rc genhtml_legend=1 00:31:42.276 --rc geninfo_all_blocks=1 00:31:42.276 --rc geninfo_unexecuted_blocks=1 00:31:42.276 00:31:42.276 ' 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:31:42.276 13:50:54 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81256 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81256 00:31:42.276 13:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81256 ']' 00:31:42.277 13:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.277 13:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:42.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.277 13:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:42.277 13:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:42.277 13:50:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:42.535 [2024-11-20 13:50:54.273460] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:31:42.535 [2024-11-20 13:50:54.273614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81256 ] 00:31:42.535 [2024-11-20 13:50:54.458052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.795 [2024-11-20 13:50:54.586340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.780 13:50:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:43.780 13:50:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:43.780 13:50:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:43.780 13:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:31:43.780 13:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:43.780 13:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:31:43.780 13:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:43.780 13:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:44.045 13:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:44.045 13:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:44.045 13:50:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:44.045 13:50:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:44.045 13:50:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:44.045 13:50:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:44.045 13:50:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:44.045 13:50:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:44.304 13:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:44.304 { 00:31:44.304 "name": "nvme0n1", 00:31:44.304 "aliases": [ 00:31:44.304 "36996b42-5313-4254-8641-53a01f0f9016" 00:31:44.304 ], 00:31:44.304 "product_name": "NVMe disk", 00:31:44.304 "block_size": 4096, 00:31:44.304 "num_blocks": 1310720, 00:31:44.304 "uuid": "36996b42-5313-4254-8641-53a01f0f9016", 00:31:44.304 "numa_id": -1, 00:31:44.304 "assigned_rate_limits": { 00:31:44.304 "rw_ios_per_sec": 0, 00:31:44.304 "rw_mbytes_per_sec": 0, 00:31:44.304 "r_mbytes_per_sec": 0, 00:31:44.304 "w_mbytes_per_sec": 0 00:31:44.304 }, 00:31:44.304 "claimed": true, 00:31:44.304 "claim_type": "read_many_write_one", 00:31:44.304 "zoned": false, 00:31:44.304 "supported_io_types": { 00:31:44.304 "read": true, 00:31:44.304 "write": true, 00:31:44.304 "unmap": true, 00:31:44.304 "flush": true, 00:31:44.304 "reset": true, 00:31:44.304 "nvme_admin": true, 00:31:44.304 "nvme_io": true, 00:31:44.304 "nvme_io_md": false, 00:31:44.304 "write_zeroes": true, 00:31:44.304 "zcopy": false, 00:31:44.304 "get_zone_info": false, 00:31:44.304 "zone_management": false, 00:31:44.304 "zone_append": false, 00:31:44.304 "compare": true, 00:31:44.304 "compare_and_write": false, 00:31:44.304 "abort": true, 00:31:44.304 "seek_hole": false, 00:31:44.304 "seek_data": false, 00:31:44.304 
"copy": true, 00:31:44.304 "nvme_iov_md": false 00:31:44.304 }, 00:31:44.304 "driver_specific": { 00:31:44.304 "nvme": [ 00:31:44.304 { 00:31:44.304 "pci_address": "0000:00:11.0", 00:31:44.304 "trid": { 00:31:44.304 "trtype": "PCIe", 00:31:44.304 "traddr": "0000:00:11.0" 00:31:44.304 }, 00:31:44.304 "ctrlr_data": { 00:31:44.304 "cntlid": 0, 00:31:44.304 "vendor_id": "0x1b36", 00:31:44.304 "model_number": "QEMU NVMe Ctrl", 00:31:44.304 "serial_number": "12341", 00:31:44.304 "firmware_revision": "8.0.0", 00:31:44.305 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:44.305 "oacs": { 00:31:44.305 "security": 0, 00:31:44.305 "format": 1, 00:31:44.305 "firmware": 0, 00:31:44.305 "ns_manage": 1 00:31:44.305 }, 00:31:44.305 "multi_ctrlr": false, 00:31:44.305 "ana_reporting": false 00:31:44.305 }, 00:31:44.305 "vs": { 00:31:44.305 "nvme_version": "1.4" 00:31:44.305 }, 00:31:44.305 "ns_data": { 00:31:44.305 "id": 1, 00:31:44.305 "can_share": false 00:31:44.305 } 00:31:44.305 } 00:31:44.305 ], 00:31:44.305 "mp_policy": "active_passive" 00:31:44.305 } 00:31:44.305 } 00:31:44.305 ]' 00:31:44.305 13:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:44.305 13:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:44.305 13:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:44.305 13:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:44.305 13:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:44.305 13:50:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:31:44.305 13:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:44.305 13:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:44.305 13:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:44.305 13:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:44.305 13:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:44.563 13:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=bafcdb6c-4d27-4309-b9ee-ca9fca1513db 00:31:44.563 13:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:44.563 13:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bafcdb6c-4d27-4309-b9ee-ca9fca1513db 00:31:44.822 13:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:45.081 13:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=835401ba-817e-4c6d-b532-9f526eee1ddd 00:31:45.081 13:50:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 835401ba-817e-4c6d-b532-9f526eee1ddd 00:31:45.341 13:50:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=435d8f65-7309-4699-9f3f-b565296d60dd 00:31:45.341 13:50:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:31:45.341 13:50:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 435d8f65-7309-4699-9f3f-b565296d60dd 00:31:45.341 13:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:31:45.341 13:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:31:45.341 13:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=435d8f65-7309-4699-9f3f-b565296d60dd 00:31:45.341 13:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:31:45.341 13:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 435d8f65-7309-4699-9f3f-b565296d60dd 00:31:45.341 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=435d8f65-7309-4699-9f3f-b565296d60dd 00:31:45.341 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:45.341 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:45.341 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:45.341 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 435d8f65-7309-4699-9f3f-b565296d60dd 00:31:45.599 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:45.599 { 00:31:45.599 "name": "435d8f65-7309-4699-9f3f-b565296d60dd", 00:31:45.599 "aliases": [ 00:31:45.599 "lvs/nvme0n1p0" 00:31:45.599 ], 00:31:45.599 "product_name": "Logical Volume", 00:31:45.599 "block_size": 4096, 00:31:45.599 "num_blocks": 26476544, 00:31:45.599 "uuid": "435d8f65-7309-4699-9f3f-b565296d60dd", 00:31:45.599 "assigned_rate_limits": { 00:31:45.599 "rw_ios_per_sec": 0, 00:31:45.599 "rw_mbytes_per_sec": 0, 00:31:45.599 "r_mbytes_per_sec": 0, 00:31:45.599 "w_mbytes_per_sec": 0 00:31:45.599 }, 00:31:45.599 "claimed": false, 00:31:45.599 "zoned": false, 00:31:45.599 "supported_io_types": { 00:31:45.599 "read": true, 00:31:45.599 "write": true, 00:31:45.599 "unmap": true, 00:31:45.599 "flush": false, 00:31:45.599 "reset": true, 00:31:45.599 "nvme_admin": false, 00:31:45.599 "nvme_io": false, 00:31:45.599 "nvme_io_md": false, 00:31:45.599 "write_zeroes": true, 00:31:45.599 "zcopy": false, 00:31:45.599 "get_zone_info": false, 00:31:45.599 "zone_management": false, 00:31:45.599 "zone_append": false, 00:31:45.599 "compare": false, 00:31:45.599 "compare_and_write": false, 00:31:45.599 "abort": false, 00:31:45.599 "seek_hole": true, 00:31:45.599 "seek_data": true, 00:31:45.599 "copy": false, 00:31:45.599 "nvme_iov_md": false 00:31:45.599 }, 00:31:45.599 "driver_specific": { 00:31:45.599 "lvol": { 00:31:45.600 "lvol_store_uuid": "835401ba-817e-4c6d-b532-9f526eee1ddd", 00:31:45.600 "base_bdev": "nvme0n1", 00:31:45.600 "thin_provision": true, 00:31:45.600 "num_allocated_clusters": 0, 00:31:45.600 "snapshot": false, 00:31:45.600 "clone": false, 00:31:45.600 "esnap_clone": false 00:31:45.600 } 00:31:45.600 } 00:31:45.600 } 00:31:45.600 ]' 00:31:45.600 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:45.600 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:45.600 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:45.600 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:45.600 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:45.600 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:45.600 13:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:31:45.600 13:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:45.600 13:50:57 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:45.858 13:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:45.858 13:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:45.858 13:50:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 435d8f65-7309-4699-9f3f-b565296d60dd 00:31:45.858 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=435d8f65-7309-4699-9f3f-b565296d60dd 00:31:45.858 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:45.858 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:45.858 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:45.858 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 435d8f65-7309-4699-9f3f-b565296d60dd 00:31:46.118 13:50:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:46.118 { 00:31:46.118 "name": "435d8f65-7309-4699-9f3f-b565296d60dd", 00:31:46.118 "aliases": [ 00:31:46.118 "lvs/nvme0n1p0" 00:31:46.118 ], 00:31:46.118 "product_name": "Logical Volume", 00:31:46.118 "block_size": 4096, 00:31:46.118 "num_blocks": 26476544, 00:31:46.118 "uuid": "435d8f65-7309-4699-9f3f-b565296d60dd", 00:31:46.118 "assigned_rate_limits": { 00:31:46.118 "rw_ios_per_sec": 0, 00:31:46.118 "rw_mbytes_per_sec": 0, 00:31:46.118 "r_mbytes_per_sec": 0, 00:31:46.118 "w_mbytes_per_sec": 0 00:31:46.118 }, 00:31:46.118 "claimed": false, 00:31:46.118 "zoned": false, 00:31:46.118 "supported_io_types": { 00:31:46.118 "read": true, 00:31:46.118 "write": true, 00:31:46.118 "unmap": true, 00:31:46.118 "flush": false, 00:31:46.118 "reset": true, 00:31:46.118 "nvme_admin": false, 00:31:46.118 "nvme_io": false, 00:31:46.118 "nvme_io_md": false, 00:31:46.118 "write_zeroes": true, 00:31:46.118 "zcopy": false, 00:31:46.118 "get_zone_info": false, 00:31:46.118 "zone_management": false, 00:31:46.118 "zone_append": false, 00:31:46.118 "compare": false, 00:31:46.118 "compare_and_write": false, 00:31:46.118 "abort": false, 00:31:46.118 "seek_hole": true, 00:31:46.118 "seek_data": true, 00:31:46.118 "copy": false, 00:31:46.118 "nvme_iov_md": false 00:31:46.118 }, 00:31:46.118 "driver_specific": { 00:31:46.118 "lvol": { 00:31:46.118 "lvol_store_uuid": "835401ba-817e-4c6d-b532-9f526eee1ddd", 00:31:46.118 "base_bdev": "nvme0n1", 00:31:46.118 "thin_provision": true, 00:31:46.118 "num_allocated_clusters": 0, 00:31:46.118 "snapshot": false, 00:31:46.118 "clone": false, 00:31:46.118 "esnap_clone": false 00:31:46.118 } 00:31:46.118 } 00:31:46.118 } 00:31:46.118 ]' 00:31:46.118 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:46.118 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:46.118 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:46.118 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:46.118 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:46.118 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:46.376 13:50:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:31:46.376 13:50:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:46.376 13:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:31:46.376 13:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 435d8f65-7309-4699-9f3f-b565296d60dd 00:31:46.376 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=435d8f65-7309-4699-9f3f-b565296d60dd 00:31:46.376 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:46.376 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:46.376 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:46.376 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 435d8f65-7309-4699-9f3f-b565296d60dd 00:31:46.634 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:46.634 { 00:31:46.634 "name": "435d8f65-7309-4699-9f3f-b565296d60dd", 00:31:46.634 "aliases": [ 00:31:46.634 "lvs/nvme0n1p0" 00:31:46.634 ], 00:31:46.634 "product_name": "Logical Volume", 00:31:46.634 "block_size": 4096, 00:31:46.634 "num_blocks": 26476544, 00:31:46.634 "uuid": "435d8f65-7309-4699-9f3f-b565296d60dd", 00:31:46.634 "assigned_rate_limits": { 00:31:46.634 "rw_ios_per_sec": 0, 00:31:46.634 "rw_mbytes_per_sec": 0, 00:31:46.634 "r_mbytes_per_sec": 0, 00:31:46.634 "w_mbytes_per_sec": 0 00:31:46.634 }, 00:31:46.634 "claimed": false, 00:31:46.634 "zoned": false, 00:31:46.634 "supported_io_types": { 00:31:46.634 "read": true, 00:31:46.634 "write": true, 00:31:46.634 "unmap": true, 00:31:46.634 "flush": false, 00:31:46.634 "reset": true, 00:31:46.634 "nvme_admin": false, 00:31:46.634 "nvme_io": false, 00:31:46.634 "nvme_io_md": false, 00:31:46.634 "write_zeroes": true, 00:31:46.634 "zcopy": false, 00:31:46.634 "get_zone_info": false, 00:31:46.634 "zone_management": false, 00:31:46.634 "zone_append": false, 00:31:46.634 "compare": false, 00:31:46.634 "compare_and_write": false, 00:31:46.634 "abort": false, 00:31:46.634 "seek_hole": true, 00:31:46.634 "seek_data": true, 00:31:46.634 "copy": false, 00:31:46.634 "nvme_iov_md": false 00:31:46.634 }, 00:31:46.634 "driver_specific": { 00:31:46.634 "lvol": { 00:31:46.634 "lvol_store_uuid": "835401ba-817e-4c6d-b532-9f526eee1ddd", 00:31:46.634 "base_bdev": "nvme0n1", 00:31:46.634 "thin_provision": true, 00:31:46.634 "num_allocated_clusters": 0, 00:31:46.634 "snapshot": false, 00:31:46.635 "clone": false, 00:31:46.635 "esnap_clone": false 00:31:46.635 } 00:31:46.635 } 00:31:46.635 } 00:31:46.635 ]' 00:31:46.635 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:46.635 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:46.635 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:46.894 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:46.894 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:46.894 13:50:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:46.894 13:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:31:46.894 13:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 435d8f65-7309-4699-9f3f-b565296d60dd 
--l2p_dram_limit 10' 00:31:46.894 13:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:31:46.894 13:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:31:46.894 13:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:46.894 13:50:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 435d8f65-7309-4699-9f3f-b565296d60dd --l2p_dram_limit 10 -c nvc0n1p0 00:31:46.894 [2024-11-20 13:50:58.815575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.894 [2024-11-20 13:50:58.815645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:46.894 [2024-11-20 13:50:58.815666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:46.894 [2024-11-20 13:50:58.815678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.894 [2024-11-20 13:50:58.815756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.894 [2024-11-20 13:50:58.815770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:46.894 [2024-11-20 13:50:58.815784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:46.894 [2024-11-20 13:50:58.815796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.894 [2024-11-20 13:50:58.815820] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:46.894 [2024-11-20 13:50:58.816763] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:46.894 [2024-11-20 13:50:58.816795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.894 [2024-11-20 13:50:58.816806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:46.894 [2024-11-20 13:50:58.816820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:31:46.894 [2024-11-20 13:50:58.816831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.895 [2024-11-20 13:50:58.816929] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID da2d074f-2de8-4281-ab42-5ee8adeba823 00:31:46.895 [2024-11-20 13:50:58.818405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.895 [2024-11-20 13:50:58.818433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:46.895 [2024-11-20 13:50:58.818448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:46.895 [2024-11-20 13:50:58.818465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.895 [2024-11-20 13:50:58.825883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.895 [2024-11-20 13:50:58.825925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:46.895 [2024-11-20 13:50:58.825938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.371 ms 00:31:46.895 [2024-11-20 13:50:58.825951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.895 [2024-11-20 13:50:58.826061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.895 [2024-11-20 13:50:58.826078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:46.895 [2024-11-20 13:50:58.826089] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:31:46.895 [2024-11-20 13:50:58.826106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.895 [2024-11-20 13:50:58.826162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.895 [2024-11-20 13:50:58.826177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:46.895 [2024-11-20 13:50:58.826187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:46.895 [2024-11-20 13:50:58.826225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.895 [2024-11-20 13:50:58.826262] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:46.895 [2024-11-20 13:50:58.831483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.895 [2024-11-20 13:50:58.831517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:46.895 [2024-11-20 13:50:58.831535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.236 ms 00:31:46.895 [2024-11-20 13:50:58.831546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.895 [2024-11-20 13:50:58.831586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.895 [2024-11-20 13:50:58.831607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:46.895 [2024-11-20 13:50:58.831622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:46.895 [2024-11-20 13:50:58.831633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.895 [2024-11-20 13:50:58.831681] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:46.895 [2024-11-20 13:50:58.831817] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:46.895 [2024-11-20 13:50:58.831838] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:46.895 [2024-11-20 13:50:58.831852] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:46.895 [2024-11-20 13:50:58.831868] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:46.895 [2024-11-20 13:50:58.831881] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:46.895 [2024-11-20 13:50:58.831894] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:46.895 [2024-11-20 13:50:58.831905] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:46.895 [2024-11-20 13:50:58.831920] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:46.895 [2024-11-20 13:50:58.831931] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:46.895 [2024-11-20 13:50:58.831943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.895 [2024-11-20 13:50:58.831954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:46.895 [2024-11-20 13:50:58.831967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:31:46.895 [2024-11-20 13:50:58.831988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.895 [2024-11-20 13:50:58.832067] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.895 [2024-11-20 13:50:58.832078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:46.895 [2024-11-20 13:50:58.832091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:46.895 [2024-11-20 13:50:58.832101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.895 [2024-11-20 13:50:58.832201] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:46.895 [2024-11-20 13:50:58.832215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:46.895 [2024-11-20 13:50:58.832229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:46.895 [2024-11-20 13:50:58.832239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:46.895 [2024-11-20 13:50:58.832252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:46.895 [2024-11-20 13:50:58.832262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:46.895 [2024-11-20 13:50:58.832274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:46.895 [2024-11-20 13:50:58.832283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:46.895 [2024-11-20 13:50:58.832295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:46.895 [2024-11-20 13:50:58.832304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:46.895 [2024-11-20 13:50:58.832316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:46.895 [2024-11-20 13:50:58.832326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:46.895 [2024-11-20 13:50:58.832338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:46.895 [2024-11-20 13:50:58.832347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:46.895 [2024-11-20 13:50:58.832361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:46.895 [2024-11-20 13:50:58.832371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:46.895 [2024-11-20 13:50:58.832385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:46.895 [2024-11-20 13:50:58.832394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:46.895 [2024-11-20 13:50:58.832408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:46.895 [2024-11-20 13:50:58.832417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:46.895 [2024-11-20 13:50:58.832429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:46.895 [2024-11-20 13:50:58.832438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:46.895 [2024-11-20 13:50:58.832450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:46.895 [2024-11-20 13:50:58.832460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:46.895 [2024-11-20 13:50:58.832471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:46.895 [2024-11-20 13:50:58.832480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:46.895 [2024-11-20 13:50:58.832492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:46.895 [2024-11-20 13:50:58.832501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:46.895 [2024-11-20 13:50:58.832513] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:46.895 [2024-11-20 13:50:58.832522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:46.895 [2024-11-20 13:50:58.832534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:46.895 [2024-11-20 13:50:58.832543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:46.895 [2024-11-20 13:50:58.832556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:46.895 [2024-11-20 13:50:58.832565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:46.895 [2024-11-20 13:50:58.832577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:46.895 [2024-11-20 13:50:58.832586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:46.895 [2024-11-20 13:50:58.832607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:46.895 [2024-11-20 13:50:58.832617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:46.895 [2024-11-20 13:50:58.832629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:46.895 [2024-11-20 13:50:58.832638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:46.895 [2024-11-20 13:50:58.832649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:46.895 [2024-11-20 13:50:58.832659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:46.895 [2024-11-20 13:50:58.832670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:46.895 [2024-11-20 13:50:58.832680] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:46.895 [2024-11-20 13:50:58.832692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:46.895 [2024-11-20 13:50:58.832702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:46.895 [2024-11-20 13:50:58.832717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:46.895 [2024-11-20 13:50:58.832728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:46.895 [2024-11-20 13:50:58.832743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:46.895 [2024-11-20 13:50:58.832752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:46.895 [2024-11-20 13:50:58.832765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:46.895 [2024-11-20 13:50:58.832774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:46.895 [2024-11-20 13:50:58.832786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:46.895 [2024-11-20 13:50:58.832801] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:46.895 [2024-11-20 13:50:58.832817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:46.895 [2024-11-20 13:50:58.832832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:46.895 [2024-11-20 13:50:58.832844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:46.895 [2024-11-20 13:50:58.832855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:46.895 [2024-11-20 13:50:58.832868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:46.895 [2024-11-20 13:50:58.832879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:46.896 [2024-11-20 13:50:58.832892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:46.896 [2024-11-20 13:50:58.832903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:46.896 [2024-11-20 13:50:58.832916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:46.896 [2024-11-20 13:50:58.832927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:46.896 [2024-11-20 13:50:58.832941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:46.896 [2024-11-20 13:50:58.832952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:46.896 [2024-11-20 13:50:58.832964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:46.896 [2024-11-20 13:50:58.832974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:46.896 [2024-11-20 13:50:58.832989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:46.896 [2024-11-20 13:50:58.832999] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:46.896 [2024-11-20 13:50:58.833013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:46.896 [2024-11-20 13:50:58.833024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:46.896 [2024-11-20 13:50:58.833037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:46.896 [2024-11-20 13:50:58.833047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:46.896 [2024-11-20 13:50:58.833060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:46.896 [2024-11-20 13:50:58.833071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:46.896 [2024-11-20 13:50:58.833085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:46.896 [2024-11-20 13:50:58.833095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:31:46.896 [2024-11-20 13:50:58.833108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:46.896 [2024-11-20 13:50:58.833151] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:46.896 [2024-11-20 13:50:58.833170] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:50.268 [2024-11-20 13:51:02.105434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.268 [2024-11-20 13:51:02.105505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:50.268 [2024-11-20 13:51:02.105525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3277.593 ms 00:31:50.268 [2024-11-20 13:51:02.105541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.268 [2024-11-20 13:51:02.146049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.268 [2024-11-20 13:51:02.146111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:50.268 [2024-11-20 13:51:02.146129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.239 ms 00:31:50.268 [2024-11-20 13:51:02.146143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.268 [2024-11-20 13:51:02.146346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.268 [2024-11-20 13:51:02.146365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:50.268 [2024-11-20 13:51:02.146377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:50.268 [2024-11-20 13:51:02.146413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.268 [2024-11-20 13:51:02.193866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.268 [2024-11-20 13:51:02.193922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:50.268 [2024-11-20 13:51:02.193938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.478 ms 00:31:50.268 [2024-11-20 13:51:02.193954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.268 [2024-11-20 13:51:02.194009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.268 [2024-11-20 13:51:02.194026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:50.268 [2024-11-20 13:51:02.194038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:50.268 [2024-11-20 13:51:02.194051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.268 [2024-11-20 13:51:02.194561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.268 [2024-11-20 13:51:02.194582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:50.268 [2024-11-20 13:51:02.194594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:31:50.268 [2024-11-20 13:51:02.194623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.268 [2024-11-20 13:51:02.194731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.268 [2024-11-20 13:51:02.194746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:50.268 [2024-11-20 13:51:02.194759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:31:50.268 [2024-11-20 13:51:02.194775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.268 [2024-11-20 13:51:02.217166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.268 [2024-11-20 13:51:02.217223] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:50.268 [2024-11-20 13:51:02.217239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.405 ms 00:31:50.268 [2024-11-20 13:51:02.217253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.528 [2024-11-20 13:51:02.245633] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:50.528 [2024-11-20 13:51:02.249173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.528 [2024-11-20 13:51:02.249209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:50.528 [2024-11-20 13:51:02.249230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.825 ms 00:31:50.528 [2024-11-20 13:51:02.249240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.528 [2024-11-20 13:51:02.339446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.528 [2024-11-20 13:51:02.339527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:50.528 [2024-11-20 13:51:02.339549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.291 ms 00:31:50.528 [2024-11-20 13:51:02.339561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.528 [2024-11-20 13:51:02.339771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.528 [2024-11-20 13:51:02.339789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:50.528 [2024-11-20 13:51:02.339808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:31:50.528 [2024-11-20 13:51:02.339819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.528 [2024-11-20 13:51:02.378035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.528 [2024-11-20 13:51:02.378089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:31:50.528 [2024-11-20 13:51:02.378109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.215 ms 00:31:50.528 [2024-11-20 13:51:02.378120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.528 [2024-11-20 13:51:02.413511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.528 [2024-11-20 13:51:02.413552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:50.528 [2024-11-20 13:51:02.413571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.391 ms 00:31:50.528 [2024-11-20 13:51:02.413581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.528 [2024-11-20 13:51:02.414391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.528 [2024-11-20 13:51:02.414415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:50.528 [2024-11-20 13:51:02.414430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:31:50.528 [2024-11-20 13:51:02.414445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.788 [2024-11-20 13:51:02.515251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.788 [2024-11-20 13:51:02.515306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:50.788 [2024-11-20 13:51:02.515330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.900 ms 00:31:50.788 [2024-11-20 13:51:02.515341] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.788 [2024-11-20 13:51:02.554859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.788 [2024-11-20 13:51:02.554911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:50.788 [2024-11-20 13:51:02.554932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.477 ms 00:31:50.788 [2024-11-20 13:51:02.554943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.788 [2024-11-20 13:51:02.592880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.788 [2024-11-20 13:51:02.592932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:50.788 [2024-11-20 13:51:02.592951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.943 ms 00:31:50.788 [2024-11-20 13:51:02.592962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.788 [2024-11-20 13:51:02.630635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.788 [2024-11-20 13:51:02.630680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:50.788 [2024-11-20 13:51:02.630700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.679 ms 00:31:50.788 [2024-11-20 13:51:02.630711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.788 [2024-11-20 13:51:02.630763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.788 [2024-11-20 13:51:02.630776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:50.788 [2024-11-20 13:51:02.630794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:50.788 [2024-11-20 13:51:02.630805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.788 [2024-11-20 13:51:02.630935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.788 [2024-11-20 13:51:02.630948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:50.788 [2024-11-20 13:51:02.630965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:31:50.788 [2024-11-20 13:51:02.630975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.788 [2024-11-20 13:51:02.632065] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3822.183 ms, result 0 00:31:50.788 { 00:31:50.788 "name": "ftl0", 00:31:50.788 "uuid": "da2d074f-2de8-4281-ab42-5ee8adeba823" 00:31:50.788 } 00:31:50.788 13:51:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:31:50.788 13:51:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:51.046 13:51:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:31:51.046 13:51:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:31:51.046 13:51:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:31:51.304 /dev/nbd0 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:31:51.304 1+0 records in 00:31:51.304 1+0 records out 00:31:51.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508893 s, 8.0 MB/s 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:31:51.304 13:51:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:31:51.304 [2024-11-20 13:51:03.226445] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:31:51.304 [2024-11-20 13:51:03.226592] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81409 ] 00:31:51.563 [2024-11-20 13:51:03.407769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.822 [2024-11-20 13:51:03.532368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.200  [2024-11-20T13:51:06.094Z] Copying: 195/1024 [MB] (195 MBps) [2024-11-20T13:51:07.030Z] Copying: 395/1024 [MB] (200 MBps) [2024-11-20T13:51:07.977Z] Copying: 594/1024 [MB] (198 MBps) [2024-11-20T13:51:08.912Z] Copying: 777/1024 [MB] (183 MBps) [2024-11-20T13:51:09.479Z] Copying: 958/1024 [MB] (180 MBps) [2024-11-20T13:51:10.415Z] Copying: 1024/1024 [MB] (average 191 MBps) 00:31:58.458 00:31:58.717 13:51:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:00.621 13:51:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:32:00.621 [2024-11-20 13:51:12.227216] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
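By this point the trace has filled testfile with 1 GiB of /dev/urandom data (bs=4096, count=262144), recorded its md5sum, and is starting a second spdk_dd that replays the file onto /dev/nbd0, the nbd export of ftl0. A condensed sketch of that write path, with every flag lifted from the trace (only the .md5 output file is illustrative; the test script holds the checksum differently):

    # Sketch only: seed random data, checksum it, then stream it through
    # the nbd device backed by the FTL bdev.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    testfile=$SPDK_DIR/test/ftl/testfile
    "$SPDK_DIR/build/bin/spdk_dd" -m 0x2 --if=/dev/urandom --of="$testfile" \
        --bs=4096 --count=262144
    md5sum "$testfile" > "$testfile.md5"             # illustrative checksum file
    "$SPDK_DIR/build/bin/spdk_dd" -m 0x2 --if="$testfile" --of=/dev/nbd0 \
        --bs=4096 --count=262144 --oflag=direct

Once the copy below finishes, the trace syncs /dev/nbd0, stops the nbd disk, and unloads ftl0.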
00:32:00.621 [2024-11-20 13:51:12.227595] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81502 ] 00:32:00.621 [2024-11-20 13:51:12.408790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.621 [2024-11-20 13:51:12.527633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.998  [2024-11-20T13:51:14.891Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-20T13:51:16.269Z] Copying: 35/1024 [MB] (17 MBps) [2024-11-20T13:51:17.207Z] Copying: 53/1024 [MB] (17 MBps) [2024-11-20T13:51:18.143Z] Copying: 71/1024 [MB] (17 MBps) [2024-11-20T13:51:19.082Z] Copying: 88/1024 [MB] (17 MBps) [2024-11-20T13:51:20.041Z] Copying: 107/1024 [MB] (18 MBps) [2024-11-20T13:51:21.007Z] Copying: 125/1024 [MB] (18 MBps) [2024-11-20T13:51:21.942Z] Copying: 143/1024 [MB] (18 MBps) [2024-11-20T13:51:22.880Z] Copying: 162/1024 [MB] (18 MBps) [2024-11-20T13:51:24.257Z] Copying: 180/1024 [MB] (18 MBps) [2024-11-20T13:51:25.192Z] Copying: 199/1024 [MB] (18 MBps) [2024-11-20T13:51:26.127Z] Copying: 219/1024 [MB] (19 MBps) [2024-11-20T13:51:27.109Z] Copying: 238/1024 [MB] (19 MBps) [2024-11-20T13:51:28.045Z] Copying: 256/1024 [MB] (18 MBps) [2024-11-20T13:51:28.981Z] Copying: 275/1024 [MB] (18 MBps) [2024-11-20T13:51:29.917Z] Copying: 293/1024 [MB] (18 MBps) [2024-11-20T13:51:30.853Z] Copying: 312/1024 [MB] (18 MBps) [2024-11-20T13:51:32.228Z] Copying: 330/1024 [MB] (18 MBps) [2024-11-20T13:51:33.163Z] Copying: 348/1024 [MB] (18 MBps) [2024-11-20T13:51:34.099Z] Copying: 368/1024 [MB] (19 MBps) [2024-11-20T13:51:35.033Z] Copying: 387/1024 [MB] (19 MBps) [2024-11-20T13:51:35.966Z] Copying: 408/1024 [MB] (20 MBps) [2024-11-20T13:51:36.902Z] Copying: 427/1024 [MB] (19 MBps) [2024-11-20T13:51:37.839Z] Copying: 447/1024 [MB] (19 MBps) [2024-11-20T13:51:39.215Z] Copying: 465/1024 [MB] (18 MBps) [2024-11-20T13:51:40.149Z] Copying: 483/1024 [MB] (18 MBps) [2024-11-20T13:51:41.084Z] Copying: 502/1024 [MB] (18 MBps) [2024-11-20T13:51:42.021Z] Copying: 519/1024 [MB] (17 MBps) [2024-11-20T13:51:42.956Z] Copying: 537/1024 [MB] (17 MBps) [2024-11-20T13:51:43.896Z] Copying: 555/1024 [MB] (18 MBps) [2024-11-20T13:51:44.832Z] Copying: 573/1024 [MB] (17 MBps) [2024-11-20T13:51:46.210Z] Copying: 590/1024 [MB] (17 MBps) [2024-11-20T13:51:47.149Z] Copying: 606/1024 [MB] (16 MBps) [2024-11-20T13:51:48.084Z] Copying: 624/1024 [MB] (17 MBps) [2024-11-20T13:51:49.018Z] Copying: 641/1024 [MB] (17 MBps) [2024-11-20T13:51:49.953Z] Copying: 661/1024 [MB] (19 MBps) [2024-11-20T13:51:50.889Z] Copying: 680/1024 [MB] (18 MBps) [2024-11-20T13:51:51.827Z] Copying: 698/1024 [MB] (18 MBps) [2024-11-20T13:51:53.227Z] Copying: 716/1024 [MB] (18 MBps) [2024-11-20T13:51:53.796Z] Copying: 734/1024 [MB] (17 MBps) [2024-11-20T13:51:55.174Z] Copying: 752/1024 [MB] (18 MBps) [2024-11-20T13:51:56.111Z] Copying: 770/1024 [MB] (17 MBps) [2024-11-20T13:51:57.050Z] Copying: 787/1024 [MB] (16 MBps) [2024-11-20T13:51:58.015Z] Copying: 804/1024 [MB] (17 MBps) [2024-11-20T13:51:58.959Z] Copying: 823/1024 [MB] (18 MBps) [2024-11-20T13:51:59.896Z] Copying: 841/1024 [MB] (17 MBps) [2024-11-20T13:52:00.832Z] Copying: 859/1024 [MB] (18 MBps) [2024-11-20T13:52:02.207Z] Copying: 877/1024 [MB] (18 MBps) [2024-11-20T13:52:03.145Z] Copying: 895/1024 [MB] (17 MBps) [2024-11-20T13:52:04.080Z] Copying: 913/1024 [MB] (18 MBps) 
[2024-11-20T13:52:05.017Z] Copying: 932/1024 [MB] (18 MBps) [2024-11-20T13:52:05.954Z] Copying: 950/1024 [MB] (18 MBps) [2024-11-20T13:52:06.891Z] Copying: 968/1024 [MB] (18 MBps) [2024-11-20T13:52:07.829Z] Copying: 986/1024 [MB] (18 MBps) [2024-11-20T13:52:09.209Z] Copying: 1004/1024 [MB] (17 MBps) [2024-11-20T13:52:09.209Z] Copying: 1021/1024 [MB] (16 MBps) [2024-11-20T13:52:10.143Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:32:58.186 00:32:58.186 13:52:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:32:58.186 13:52:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:32:58.446 13:52:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:58.705 [2024-11-20 13:52:10.502307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.705 [2024-11-20 13:52:10.502378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:58.705 [2024-11-20 13:52:10.502397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:58.705 [2024-11-20 13:52:10.502411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.705 [2024-11-20 13:52:10.502441] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:58.705 [2024-11-20 13:52:10.506585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.705 [2024-11-20 13:52:10.506626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:58.705 [2024-11-20 13:52:10.506643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.125 ms 00:32:58.705 [2024-11-20 13:52:10.506654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.705 [2024-11-20 13:52:10.508750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.705 [2024-11-20 13:52:10.508791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:58.705 [2024-11-20 13:52:10.508808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:32:58.705 [2024-11-20 13:52:10.508818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.705 [2024-11-20 13:52:10.527367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.705 [2024-11-20 13:52:10.527418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:58.705 [2024-11-20 13:52:10.527437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.550 ms 00:32:58.705 [2024-11-20 13:52:10.527448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.705 [2024-11-20 13:52:10.532647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.705 [2024-11-20 13:52:10.532797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:58.705 [2024-11-20 13:52:10.532826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.162 ms 00:32:58.705 [2024-11-20 13:52:10.532837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.705 [2024-11-20 13:52:10.569882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.705 [2024-11-20 13:52:10.569929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:58.705 [2024-11-20 13:52:10.569949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 37.006 ms 00:32:58.705 [2024-11-20 13:52:10.569960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.705 [2024-11-20 13:52:10.592910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.705 [2024-11-20 13:52:10.592972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:58.705 [2024-11-20 13:52:10.592995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.933 ms 00:32:58.705 [2024-11-20 13:52:10.593009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.705 [2024-11-20 13:52:10.593175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.705 [2024-11-20 13:52:10.593190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:58.705 [2024-11-20 13:52:10.593204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:32:58.705 [2024-11-20 13:52:10.593215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.706 [2024-11-20 13:52:10.630281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.706 [2024-11-20 13:52:10.630337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:58.706 [2024-11-20 13:52:10.630357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.099 ms 00:32:58.706 [2024-11-20 13:52:10.630368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.966 [2024-11-20 13:52:10.666865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.966 [2024-11-20 13:52:10.666925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:58.966 [2024-11-20 13:52:10.666944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.500 ms 00:32:58.966 [2024-11-20 13:52:10.666955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.966 [2024-11-20 13:52:10.703485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.966 [2024-11-20 13:52:10.703783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:58.966 [2024-11-20 13:52:10.703817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.530 ms 00:32:58.966 [2024-11-20 13:52:10.703830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.966 [2024-11-20 13:52:10.740076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.966 [2024-11-20 13:52:10.740129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:58.966 [2024-11-20 13:52:10.740149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.167 ms 00:32:58.966 [2024-11-20 13:52:10.740160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.966 [2024-11-20 13:52:10.740213] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:58.966 [2024-11-20 13:52:10.740232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 
[2024-11-20 13:52:10.740285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:58.966 [2024-11-20 13:52:10.740539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:32:58.967 [2024-11-20 13:52:10.740625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.740996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:58.967 [2024-11-20 13:52:10.741530] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:58.967 [2024-11-20 13:52:10.741544] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da2d074f-2de8-4281-ab42-5ee8adeba823 00:32:58.967 [2024-11-20 13:52:10.741555] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:58.967 [2024-11-20 13:52:10.741571] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:32:58.967 [2024-11-20 13:52:10.741581] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:58.967 [2024-11-20 13:52:10.741611] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:58.967 [2024-11-20 13:52:10.741622] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:58.967 [2024-11-20 13:52:10.741636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:58.967 [2024-11-20 13:52:10.741646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:58.967 [2024-11-20 13:52:10.741658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:58.967 [2024-11-20 13:52:10.741667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:58.967 [2024-11-20 13:52:10.741679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.967 [2024-11-20 13:52:10.741689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:58.967 [2024-11-20 13:52:10.741703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.471 ms 00:32:58.967 [2024-11-20 13:52:10.741713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.967 [2024-11-20 13:52:10.761677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.967 [2024-11-20 13:52:10.761860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:58.967 [2024-11-20 13:52:10.761888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.931 ms 00:32:58.968 [2024-11-20 13:52:10.761899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.968 [2024-11-20 13:52:10.762519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:58.968 [2024-11-20 13:52:10.762536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:58.968 [2024-11-20 13:52:10.762550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:32:58.968 [2024-11-20 13:52:10.762561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.968 [2024-11-20 13:52:10.828953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:58.968 [2024-11-20 13:52:10.829152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:58.968 [2024-11-20 13:52:10.829183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:58.968 [2024-11-20 13:52:10.829194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.968 [2024-11-20 13:52:10.829299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:58.968 [2024-11-20 13:52:10.829314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:58.968 [2024-11-20 13:52:10.829327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:58.968 [2024-11-20 13:52:10.829338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.968 [2024-11-20 13:52:10.829486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:58.968 [2024-11-20 13:52:10.829504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:58.968 [2024-11-20 13:52:10.829517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:58.968 [2024-11-20 13:52:10.829528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:58.968 [2024-11-20 13:52:10.829555] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:58.968 [2024-11-20 13:52:10.829566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:58.968 [2024-11-20 13:52:10.829579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:58.968 [2024-11-20 13:52:10.829589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.227 [2024-11-20 13:52:10.957148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.227 [2024-11-20 13:52:10.957223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:59.227 [2024-11-20 13:52:10.957243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.227 [2024-11-20 13:52:10.957254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.227 [2024-11-20 13:52:11.059279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.227 [2024-11-20 13:52:11.059353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:59.227 [2024-11-20 13:52:11.059372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.227 [2024-11-20 13:52:11.059383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.227 [2024-11-20 13:52:11.059520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.227 [2024-11-20 13:52:11.059534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:59.227 [2024-11-20 13:52:11.059548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.227 [2024-11-20 13:52:11.059561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.227 [2024-11-20 13:52:11.059649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.227 [2024-11-20 13:52:11.059663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:59.227 [2024-11-20 13:52:11.059676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.227 [2024-11-20 13:52:11.059686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.227 [2024-11-20 13:52:11.059832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.227 [2024-11-20 13:52:11.059845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:59.227 [2024-11-20 13:52:11.059859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.227 [2024-11-20 13:52:11.059873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.227 [2024-11-20 13:52:11.059916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.227 [2024-11-20 13:52:11.059928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:59.227 [2024-11-20 13:52:11.059942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.227 [2024-11-20 13:52:11.059952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.227 [2024-11-20 13:52:11.060007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.227 [2024-11-20 13:52:11.060018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:59.227 [2024-11-20 13:52:11.060031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.227 [2024-11-20 13:52:11.060041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:32:59.227 [2024-11-20 13:52:11.060096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:59.227 [2024-11-20 13:52:11.060109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:59.227 [2024-11-20 13:52:11.060122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:59.227 [2024-11-20 13:52:11.060132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.227 [2024-11-20 13:52:11.060277] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 558.840 ms, result 0 00:32:59.227 true 00:32:59.227 13:52:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81256 00:32:59.227 13:52:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81256 00:32:59.227 13:52:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:32:59.486 [2024-11-20 13:52:11.195712] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:32:59.486 [2024-11-20 13:52:11.195857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82096 ] 00:32:59.486 [2024-11-20 13:52:11.381195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.744 [2024-11-20 13:52:11.500484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.118  [2024-11-20T13:52:14.010Z] Copying: 200/1024 [MB] (200 MBps) [2024-11-20T13:52:14.948Z] Copying: 403/1024 [MB] (202 MBps) [2024-11-20T13:52:15.887Z] Copying: 607/1024 [MB] (204 MBps) [2024-11-20T13:52:16.824Z] Copying: 808/1024 [MB] (200 MBps) [2024-11-20T13:52:17.082Z] Copying: 1008/1024 [MB] (200 MBps) [2024-11-20T13:52:18.456Z] Copying: 1024/1024 [MB] (average 201 MBps) 00:33:06.499 00:33:06.499 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81256 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:33:06.499 13:52:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:06.499 [2024-11-20 13:52:18.146928] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
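The xtrace markers above (ftl/dirty_shutdown.sh lines 78-88) correspond to the following sequence. This is a condensed sketch, with the pid and paths copied from the log; the shell variables ($rpc_py, $pid, $testfile2, $ftl_json) are illustrative stand-ins, not names taken from the script itself:

    sync /dev/nbd0                                    # flush the NBD view of ftl0 (line 78)
    "$rpc_py" nbd_stop_disk /dev/nbd0                 # detach the NBD export (line 79)
    "$rpc_py" bdev_ftl_unload -b ftl0                 # clean FTL shutdown: the 'FTL shutdown' trace above (line 80)
    kill -9 "$pid"                                    # hard-kill spdk_tgt (81256 here), no orderly teardown (line 83)
    rm -f "/dev/shm/spdk_tgt_trace.pid$pid"           # drop its trace file (line 84)
    # Stage 1 GiB of random reference data (262144 x 4096 B), then push it
    # through ftl0 with spdk_dd; opening ftl0 from its on-disk state produces
    # the 'FTL startup' trace that follows (lines 87-88).
    "$SPDK_BIN_DIR/spdk_dd" --if=/dev/urandom --of="$testfile2" --bs=4096 --count=262144
    "$SPDK_BIN_DIR/spdk_dd" --if="$testfile2" --ob=ftl0 --count=262144 --seek=262144 \
        --json="$ftl_json"

Note that spdk_dd runs here as a standalone SPDK application: --json points it at the saved bdev configuration (test/ftl/config/ftl.json in the log), so no running spdk_tgt is needed for the write.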
00:33:06.499 [2024-11-20 13:52:18.147061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82171 ] 00:33:06.499 [2024-11-20 13:52:18.329368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.499 [2024-11-20 13:52:18.449414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.065 [2024-11-20 13:52:18.820880] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:07.065 [2024-11-20 13:52:18.820948] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:07.065 [2024-11-20 13:52:18.887385] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:07.065 [2024-11-20 13:52:18.887669] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:07.065 [2024-11-20 13:52:18.887889] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:07.324 [2024-11-20 13:52:19.186186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.324 [2024-11-20 13:52:19.186261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:07.324 [2024-11-20 13:52:19.186278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:07.324 [2024-11-20 13:52:19.186289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.324 [2024-11-20 13:52:19.186343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.324 [2024-11-20 13:52:19.186356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:07.324 [2024-11-20 13:52:19.186366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:33:07.324 [2024-11-20 13:52:19.186376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.324 [2024-11-20 13:52:19.186399] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:07.324 [2024-11-20 13:52:19.187305] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:07.324 [2024-11-20 13:52:19.187333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.324 [2024-11-20 13:52:19.187344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:07.324 [2024-11-20 13:52:19.187356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:33:07.324 [2024-11-20 13:52:19.187366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.324 [2024-11-20 13:52:19.188962] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:07.324 [2024-11-20 13:52:19.209065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.325 [2024-11-20 13:52:19.209219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:07.325 [2024-11-20 13:52:19.209345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.137 ms 00:33:07.325 [2024-11-20 13:52:19.209385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.325 [2024-11-20 13:52:19.209469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.325 [2024-11-20 13:52:19.209509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:33:07.325 [2024-11-20 13:52:19.209539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:33:07.325 [2024-11-20 13:52:19.209635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.325 [2024-11-20 13:52:19.216454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.325 [2024-11-20 13:52:19.216589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:07.325 [2024-11-20 13:52:19.216742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.724 ms 00:33:07.325 [2024-11-20 13:52:19.216780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.325 [2024-11-20 13:52:19.216890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.325 [2024-11-20 13:52:19.217033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:07.325 [2024-11-20 13:52:19.217124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:33:07.325 [2024-11-20 13:52:19.217154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.325 [2024-11-20 13:52:19.217223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.325 [2024-11-20 13:52:19.217257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:07.325 [2024-11-20 13:52:19.217289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:07.325 [2024-11-20 13:52:19.217379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.325 [2024-11-20 13:52:19.217438] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:07.325 [2024-11-20 13:52:19.222560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.325 [2024-11-20 13:52:19.222705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:07.325 [2024-11-20 13:52:19.222837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.139 ms 00:33:07.325 [2024-11-20 13:52:19.222853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.325 [2024-11-20 13:52:19.222894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.325 [2024-11-20 13:52:19.222905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:07.325 [2024-11-20 13:52:19.222917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:07.325 [2024-11-20 13:52:19.222927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.325 [2024-11-20 13:52:19.222985] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:07.325 [2024-11-20 13:52:19.223010] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:07.325 [2024-11-20 13:52:19.223047] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:07.325 [2024-11-20 13:52:19.223065] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:07.325 [2024-11-20 13:52:19.223155] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:07.325 [2024-11-20 13:52:19.223168] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:07.325 
[2024-11-20 13:52:19.223181] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:07.325 [2024-11-20 13:52:19.223195] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:07.325 [2024-11-20 13:52:19.223211] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:07.325 [2024-11-20 13:52:19.223222] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:07.325 [2024-11-20 13:52:19.223233] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:07.325 [2024-11-20 13:52:19.223243] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:07.325 [2024-11-20 13:52:19.223254] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:07.325 [2024-11-20 13:52:19.223264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.325 [2024-11-20 13:52:19.223274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:07.325 [2024-11-20 13:52:19.223285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:33:07.325 [2024-11-20 13:52:19.223295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.325 [2024-11-20 13:52:19.223369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.325 [2024-11-20 13:52:19.223383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:07.325 [2024-11-20 13:52:19.223394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:33:07.325 [2024-11-20 13:52:19.223404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.325 [2024-11-20 13:52:19.223500] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:07.325 [2024-11-20 13:52:19.223514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:07.325 [2024-11-20 13:52:19.223525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:07.325 [2024-11-20 13:52:19.223535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:07.325 [2024-11-20 13:52:19.223546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:07.325 [2024-11-20 13:52:19.223555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:07.325 [2024-11-20 13:52:19.223565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:07.325 [2024-11-20 13:52:19.223574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:07.325 [2024-11-20 13:52:19.223584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:07.325 [2024-11-20 13:52:19.223594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:07.325 [2024-11-20 13:52:19.223618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:07.325 [2024-11-20 13:52:19.223638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:07.325 [2024-11-20 13:52:19.223649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:07.325 [2024-11-20 13:52:19.223658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:07.325 [2024-11-20 13:52:19.223668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:07.325 [2024-11-20 13:52:19.223677] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:07.325 [2024-11-20 13:52:19.223686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:07.325 [2024-11-20 13:52:19.223696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:07.325 [2024-11-20 13:52:19.223705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:07.325 [2024-11-20 13:52:19.223714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:07.325 [2024-11-20 13:52:19.223723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:07.325 [2024-11-20 13:52:19.223732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:07.325 [2024-11-20 13:52:19.223741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:07.325 [2024-11-20 13:52:19.223750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:07.325 [2024-11-20 13:52:19.223759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:07.325 [2024-11-20 13:52:19.223768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:07.325 [2024-11-20 13:52:19.223777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:07.325 [2024-11-20 13:52:19.223786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:07.325 [2024-11-20 13:52:19.223795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:07.325 [2024-11-20 13:52:19.223804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:07.325 [2024-11-20 13:52:19.223812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:07.325 [2024-11-20 13:52:19.223821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:07.325 [2024-11-20 13:52:19.223830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:07.325 [2024-11-20 13:52:19.223839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:07.325 [2024-11-20 13:52:19.223848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:07.325 [2024-11-20 13:52:19.223856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:07.325 [2024-11-20 13:52:19.223865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:07.325 [2024-11-20 13:52:19.223874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:07.325 [2024-11-20 13:52:19.223883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:07.325 [2024-11-20 13:52:19.223892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:07.325 [2024-11-20 13:52:19.223901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:07.325 [2024-11-20 13:52:19.223909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:07.325 [2024-11-20 13:52:19.223918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:07.325 [2024-11-20 13:52:19.223927] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:07.325 [2024-11-20 13:52:19.223939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:07.325 [2024-11-20 13:52:19.223949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:07.325 [2024-11-20 13:52:19.223962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:07.325 [2024-11-20 
13:52:19.223973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:07.325 [2024-11-20 13:52:19.223982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:07.325 [2024-11-20 13:52:19.223992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:07.325 [2024-11-20 13:52:19.224002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:07.325 [2024-11-20 13:52:19.224011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:07.325 [2024-11-20 13:52:19.224020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:07.325 [2024-11-20 13:52:19.224030] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:07.326 [2024-11-20 13:52:19.224042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:07.326 [2024-11-20 13:52:19.224053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:07.326 [2024-11-20 13:52:19.224063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:07.326 [2024-11-20 13:52:19.224073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:07.326 [2024-11-20 13:52:19.224084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:07.326 [2024-11-20 13:52:19.224094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:07.326 [2024-11-20 13:52:19.224105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:07.326 [2024-11-20 13:52:19.224115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:07.326 [2024-11-20 13:52:19.224126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:07.326 [2024-11-20 13:52:19.224136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:07.326 [2024-11-20 13:52:19.224147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:07.326 [2024-11-20 13:52:19.224157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:07.326 [2024-11-20 13:52:19.224167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:07.326 [2024-11-20 13:52:19.224178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:07.326 [2024-11-20 13:52:19.224190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:07.326 [2024-11-20 13:52:19.224201] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:33:07.326 [2024-11-20 13:52:19.224212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:07.326 [2024-11-20 13:52:19.224223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:07.326 [2024-11-20 13:52:19.224234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:07.326 [2024-11-20 13:52:19.224244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:07.326 [2024-11-20 13:52:19.224255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:07.326 [2024-11-20 13:52:19.224265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.326 [2024-11-20 13:52:19.224278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:07.326 [2024-11-20 13:52:19.224289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.822 ms 00:33:07.326 [2024-11-20 13:52:19.224300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.326 [2024-11-20 13:52:19.263438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.326 [2024-11-20 13:52:19.263482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:07.326 [2024-11-20 13:52:19.263497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.153 ms 00:33:07.326 [2024-11-20 13:52:19.263508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.326 [2024-11-20 13:52:19.263610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.326 [2024-11-20 13:52:19.263627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:07.326 [2024-11-20 13:52:19.263638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:33:07.326 [2024-11-20 13:52:19.263649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.585 [2024-11-20 13:52:19.322875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.322934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:07.585 [2024-11-20 13:52:19.322955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.244 ms 00:33:07.585 [2024-11-20 13:52:19.322965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.585 [2024-11-20 13:52:19.323015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.323027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:07.585 [2024-11-20 13:52:19.323040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:07.585 [2024-11-20 13:52:19.323050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.585 [2024-11-20 13:52:19.323541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.323556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:07.585 [2024-11-20 13:52:19.323567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:33:07.585 [2024-11-20 13:52:19.323577] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.585 [2024-11-20 13:52:19.323743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.323758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:07.585 [2024-11-20 13:52:19.323770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:33:07.585 [2024-11-20 13:52:19.323780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.585 [2024-11-20 13:52:19.342907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.343071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:07.585 [2024-11-20 13:52:19.343095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.135 ms 00:33:07.585 [2024-11-20 13:52:19.343106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.585 [2024-11-20 13:52:19.362566] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:07.585 [2024-11-20 13:52:19.362722] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:07.585 [2024-11-20 13:52:19.362742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.362754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:07.585 [2024-11-20 13:52:19.362767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.542 ms 00:33:07.585 [2024-11-20 13:52:19.362777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.585 [2024-11-20 13:52:19.392362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.392404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:07.585 [2024-11-20 13:52:19.392432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.552 ms 00:33:07.585 [2024-11-20 13:52:19.392443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.585 [2024-11-20 13:52:19.411083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.411121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:07.585 [2024-11-20 13:52:19.411135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.625 ms 00:33:07.585 [2024-11-20 13:52:19.411146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.585 [2024-11-20 13:52:19.429290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.429328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:07.585 [2024-11-20 13:52:19.429341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.135 ms 00:33:07.585 [2024-11-20 13:52:19.429351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.585 [2024-11-20 13:52:19.430126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.430157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:07.585 [2024-11-20 13:52:19.430170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:33:07.585 [2024-11-20 13:52:19.430181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
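The capacities printed in the layout dump above are mutually consistent. Assuming the FTL's logical block is 4 KiB (the same 4096-byte unit spdk_dd writes with; the block size is the only assumption here, the rest is arithmetic on values from the dump):

\[
\begin{aligned}
20971520 \text{ L2P entries} \times 4\,\mathrm{B} &= 80\ \mathrm{MiB} && \text{(the 80.00 MiB l2p region)}\\
20971520 \times 4\,\mathrm{KiB} &= 80\ \mathrm{GiB} && \text{(mapped user capacity)}\\
261120 \text{ blocks} \times 4\,\mathrm{KiB} &= 1020\ \mathrm{MiB} && \text{(one band, cf. the 0 / 261120 band dumps)}
\end{aligned}
\]

The remainder of the 102400.00 MiB data_btm region beyond the 80 GiB the L2P can map is presumably left to the FTL as spare capacity for relocation.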
00:33:07.585 [2024-11-20 13:52:19.514683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.514744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:07.585 [2024-11-20 13:52:19.514763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.608 ms 00:33:07.585 [2024-11-20 13:52:19.514774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.585 [2024-11-20 13:52:19.525725] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:07.585 [2024-11-20 13:52:19.528525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.528556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:07.585 [2024-11-20 13:52:19.528571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.719 ms 00:33:07.585 [2024-11-20 13:52:19.528582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.585 [2024-11-20 13:52:19.528694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.528727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:07.585 [2024-11-20 13:52:19.528739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:07.585 [2024-11-20 13:52:19.528750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.585 [2024-11-20 13:52:19.528825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.528838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:07.585 [2024-11-20 13:52:19.528848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:33:07.585 [2024-11-20 13:52:19.528859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.585 [2024-11-20 13:52:19.528884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.528899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:07.585 [2024-11-20 13:52:19.528910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:07.585 [2024-11-20 13:52:19.528920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.585 [2024-11-20 13:52:19.528953] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:07.585 [2024-11-20 13:52:19.528965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.585 [2024-11-20 13:52:19.528975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:07.585 [2024-11-20 13:52:19.528986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:07.585 [2024-11-20 13:52:19.528995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.870 [2024-11-20 13:52:19.565049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.870 [2024-11-20 13:52:19.565088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:07.870 [2024-11-20 13:52:19.565104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.084 ms 00:33:07.870 [2024-11-20 13:52:19.565115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.870 [2024-11-20 13:52:19.565192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.870 [2024-11-20 
13:52:19.565204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:07.870 [2024-11-20 13:52:19.565216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:33:07.870 [2024-11-20 13:52:19.565225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.870 [2024-11-20 13:52:19.566415] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 380.293 ms, result 0 00:33:08.833  [2024-11-20T13:52:21.726Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-20T13:52:22.664Z] Copying: 53/1024 [MB] (26 MBps) [2024-11-20T13:52:23.600Z] Copying: 80/1024 [MB] (27 MBps) [2024-11-20T13:52:25.004Z] Copying: 110/1024 [MB] (29 MBps) [2024-11-20T13:52:25.575Z] Copying: 141/1024 [MB] (30 MBps) [2024-11-20T13:52:26.954Z] Copying: 173/1024 [MB] (32 MBps) [2024-11-20T13:52:27.891Z] Copying: 207/1024 [MB] (34 MBps) [2024-11-20T13:52:28.832Z] Copying: 241/1024 [MB] (33 MBps) [2024-11-20T13:52:29.787Z] Copying: 272/1024 [MB] (31 MBps) [2024-11-20T13:52:30.734Z] Copying: 303/1024 [MB] (30 MBps) [2024-11-20T13:52:31.673Z] Copying: 333/1024 [MB] (29 MBps) [2024-11-20T13:52:32.611Z] Copying: 365/1024 [MB] (32 MBps) [2024-11-20T13:52:33.990Z] Copying: 397/1024 [MB] (31 MBps) [2024-11-20T13:52:34.558Z] Copying: 426/1024 [MB] (29 MBps) [2024-11-20T13:52:35.934Z] Copying: 454/1024 [MB] (28 MBps) [2024-11-20T13:52:36.873Z] Copying: 482/1024 [MB] (27 MBps) [2024-11-20T13:52:37.808Z] Copying: 510/1024 [MB] (27 MBps) [2024-11-20T13:52:38.743Z] Copying: 536/1024 [MB] (26 MBps) [2024-11-20T13:52:39.702Z] Copying: 563/1024 [MB] (26 MBps) [2024-11-20T13:52:40.657Z] Copying: 589/1024 [MB] (26 MBps) [2024-11-20T13:52:41.594Z] Copying: 615/1024 [MB] (25 MBps) [2024-11-20T13:52:42.598Z] Copying: 640/1024 [MB] (25 MBps) [2024-11-20T13:52:43.973Z] Copying: 670/1024 [MB] (29 MBps) [2024-11-20T13:52:44.541Z] Copying: 700/1024 [MB] (30 MBps) [2024-11-20T13:52:45.920Z] Copying: 728/1024 [MB] (28 MBps) [2024-11-20T13:52:46.857Z] Copying: 755/1024 [MB] (26 MBps) [2024-11-20T13:52:47.793Z] Copying: 783/1024 [MB] (27 MBps) [2024-11-20T13:52:48.732Z] Copying: 809/1024 [MB] (26 MBps) [2024-11-20T13:52:49.682Z] Copying: 835/1024 [MB] (25 MBps) [2024-11-20T13:52:50.618Z] Copying: 861/1024 [MB] (25 MBps) [2024-11-20T13:52:51.556Z] Copying: 888/1024 [MB] (27 MBps) [2024-11-20T13:52:52.935Z] Copying: 914/1024 [MB] (25 MBps) [2024-11-20T13:52:53.871Z] Copying: 940/1024 [MB] (25 MBps) [2024-11-20T13:52:54.809Z] Copying: 965/1024 [MB] (25 MBps) [2024-11-20T13:52:55.746Z] Copying: 991/1024 [MB] (25 MBps) [2024-11-20T13:52:56.721Z] Copying: 1016/1024 [MB] (25 MBps) [2024-11-20T13:52:56.721Z] Copying: 1048548/1048576 [kB] (7512 kBps) [2024-11-20T13:52:56.721Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-20 13:52:56.553547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.764 [2024-11-20 13:52:56.553630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:44.764 [2024-11-20 13:52:56.553670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:44.764 [2024-11-20 13:52:56.553683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.764 [2024-11-20 13:52:56.556683] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:44.764 [2024-11-20 13:52:56.561744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.764 [2024-11-20 13:52:56.561916] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:44.764 [2024-11-20 13:52:56.561942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.004 ms 00:33:44.764 [2024-11-20 13:52:56.561953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.764 [2024-11-20 13:52:56.571116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.764 [2024-11-20 13:52:56.571162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:44.764 [2024-11-20 13:52:56.571178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.349 ms 00:33:44.764 [2024-11-20 13:52:56.571189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.764 [2024-11-20 13:52:56.594380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.764 [2024-11-20 13:52:56.594434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:44.764 [2024-11-20 13:52:56.594452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.209 ms 00:33:44.764 [2024-11-20 13:52:56.594465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.764 [2024-11-20 13:52:56.599562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.764 [2024-11-20 13:52:56.599731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:44.764 [2024-11-20 13:52:56.599753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.068 ms 00:33:44.764 [2024-11-20 13:52:56.599773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.764 [2024-11-20 13:52:56.637403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.764 [2024-11-20 13:52:56.637456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:44.764 [2024-11-20 13:52:56.637473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.637 ms 00:33:44.764 [2024-11-20 13:52:56.637484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.764 [2024-11-20 13:52:56.658732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.764 [2024-11-20 13:52:56.659059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:44.764 [2024-11-20 13:52:56.659091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.234 ms 00:33:44.764 [2024-11-20 13:52:56.659103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.025 [2024-11-20 13:52:56.771341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.025 [2024-11-20 13:52:56.771449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:45.025 [2024-11-20 13:52:56.771483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.269 ms 00:33:45.025 [2024-11-20 13:52:56.771494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.025 [2024-11-20 13:52:56.809052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.025 [2024-11-20 13:52:56.809097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:45.025 [2024-11-20 13:52:56.809113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.598 ms 00:33:45.025 [2024-11-20 13:52:56.809124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.025 [2024-11-20 13:52:56.845502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:33:45.025 [2024-11-20 13:52:56.845691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:45.025 [2024-11-20 13:52:56.845715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.397 ms 00:33:45.025 [2024-11-20 13:52:56.845726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.025 [2024-11-20 13:52:56.882166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.025 [2024-11-20 13:52:56.882208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:45.025 [2024-11-20 13:52:56.882222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.459 ms 00:33:45.025 [2024-11-20 13:52:56.882248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.025 [2024-11-20 13:52:56.918333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.025 [2024-11-20 13:52:56.918371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:45.025 [2024-11-20 13:52:56.918385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.057 ms 00:33:45.025 [2024-11-20 13:52:56.918395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.026 [2024-11-20 13:52:56.918441] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:45.026 [2024-11-20 13:52:56.918458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 107008 / 261120 wr_cnt: 1 state: open 00:33:45.026 [2024-11-20 13:52:56.918471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 
13:52:56.918635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 
00:33:45.026 [2024-11-20 13:52:56.918907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.918991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 
wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:45.026 [2024-11-20 13:52:56.919288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:45.027 [2024-11-20 13:52:56.919567] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:45.027 [2024-11-20 13:52:56.919582] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da2d074f-2de8-4281-ab42-5ee8adeba823 00:33:45.027 [2024-11-20 13:52:56.919593] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107008 00:33:45.027 [2024-11-20 13:52:56.919618] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107968 00:33:45.027 [2024-11-20 13:52:56.919637] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107008 00:33:45.027 [2024-11-20 13:52:56.919648] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090 00:33:45.027 [2024-11-20 13:52:56.919658] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:45.027 [2024-11-20 13:52:56.919668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:45.027 [2024-11-20 13:52:56.919679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:45.027 [2024-11-20 13:52:56.919689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:45.027 [2024-11-20 13:52:56.919697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:45.027 [2024-11-20 13:52:56.919708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.027 [2024-11-20 13:52:56.919718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:45.027 [2024-11-20 13:52:56.919729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.278 ms 00:33:45.027 [2024-11-20 13:52:56.919739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.027 [2024-11-20 13:52:56.939573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.027 [2024-11-20 13:52:56.939629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:45.027 [2024-11-20 13:52:56.939645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.782 ms 00:33:45.027 
[2024-11-20 13:52:56.939677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.027 [2024-11-20 13:52:56.940220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.027 [2024-11-20 13:52:56.940236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:45.027 [2024-11-20 13:52:56.940248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:33:45.027 [2024-11-20 13:52:56.940264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.287 [2024-11-20 13:52:56.991751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:45.287 [2024-11-20 13:52:56.991811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:45.287 [2024-11-20 13:52:56.991827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:45.287 [2024-11-20 13:52:56.991838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.287 [2024-11-20 13:52:56.991916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:45.287 [2024-11-20 13:52:56.991927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:45.287 [2024-11-20 13:52:56.991938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:45.287 [2024-11-20 13:52:56.991952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.287 [2024-11-20 13:52:56.992034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:45.287 [2024-11-20 13:52:56.992048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:45.287 [2024-11-20 13:52:56.992060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:45.287 [2024-11-20 13:52:56.992069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.287 [2024-11-20 13:52:56.992095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:45.287 [2024-11-20 13:52:56.992107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:45.287 [2024-11-20 13:52:56.992118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:45.287 [2024-11-20 13:52:56.992128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.287 [2024-11-20 13:52:57.116948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:45.287 [2024-11-20 13:52:57.117022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:45.287 [2024-11-20 13:52:57.117039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:45.287 [2024-11-20 13:52:57.117050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.287 [2024-11-20 13:52:57.218824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:45.287 [2024-11-20 13:52:57.218896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:45.287 [2024-11-20 13:52:57.218912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:45.287 [2024-11-20 13:52:57.218923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.287 [2024-11-20 13:52:57.219035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:45.287 [2024-11-20 13:52:57.219047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:45.287 [2024-11-20 13:52:57.219058] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:45.287 [2024-11-20 13:52:57.219068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.287 [2024-11-20 13:52:57.219115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:45.287 [2024-11-20 13:52:57.219128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:45.287 [2024-11-20 13:52:57.219138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:45.287 [2024-11-20 13:52:57.219148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.287 [2024-11-20 13:52:57.219264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:45.287 [2024-11-20 13:52:57.219278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:45.287 [2024-11-20 13:52:57.219289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:45.287 [2024-11-20 13:52:57.219312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.287 [2024-11-20 13:52:57.219351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:45.287 [2024-11-20 13:52:57.219363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:45.287 [2024-11-20 13:52:57.219375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:45.287 [2024-11-20 13:52:57.219385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.287 [2024-11-20 13:52:57.219426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:45.287 [2024-11-20 13:52:57.219442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:45.287 [2024-11-20 13:52:57.219453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:45.287 [2024-11-20 13:52:57.219463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.287 [2024-11-20 13:52:57.219508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:45.287 [2024-11-20 13:52:57.219520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:45.287 [2024-11-20 13:52:57.219531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:45.287 [2024-11-20 13:52:57.219541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.287 [2024-11-20 13:52:57.219701] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 669.419 ms, result 0 00:33:47.196 00:33:47.196 00:33:47.196 13:52:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:33:49.101 13:53:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:49.101 [2024-11-20 13:53:00.905956] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
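Two figures from the run above can be cross-checked by hand. The WAF in the shutdown statistics dump is simply total writes divided by user writes, and the spdk_dd transfer size follows from --count times the logical block size; the 4096 B block size is an assumption here, though the 1048576 kB total in the copy summary implies it. A minimal shell check:

    $ awk 'BEGIN { printf "WAF: %.4f\n", 107968 / 107008 }'   # total writes / user writes, from the stats dump
    WAF: 1.0090
    $ echo "$(( 262144 * 4096 / 1024 / 1024 )) MiB"           # --count=262144 blocks, assuming 4096 B each
    1024 MiB

Both agree with the logged values: WAF: 1.0090 and Copying: 1024/1024 [MB].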
00:33:49.101 [2024-11-20 13:53:00.906105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82592 ] 00:33:49.360 [2024-11-20 13:53:01.091974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.360 [2024-11-20 13:53:01.217729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.936 [2024-11-20 13:53:01.583323] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:49.936 [2024-11-20 13:53:01.583620] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:49.936 [2024-11-20 13:53:01.745522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.936 [2024-11-20 13:53:01.745584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:49.936 [2024-11-20 13:53:01.745618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:49.936 [2024-11-20 13:53:01.745630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.936 [2024-11-20 13:53:01.745684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.936 [2024-11-20 13:53:01.745697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:49.936 [2024-11-20 13:53:01.745712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:33:49.936 [2024-11-20 13:53:01.745722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.936 [2024-11-20 13:53:01.745744] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:49.936 [2024-11-20 13:53:01.746655] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:49.936 [2024-11-20 13:53:01.746682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.936 [2024-11-20 13:53:01.746695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:49.936 [2024-11-20 13:53:01.746705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:33:49.936 [2024-11-20 13:53:01.746716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.936 [2024-11-20 13:53:01.748166] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:49.936 [2024-11-20 13:53:01.767624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.936 [2024-11-20 13:53:01.767672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:49.936 [2024-11-20 13:53:01.767690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.489 ms 00:33:49.936 [2024-11-20 13:53:01.767702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.936 [2024-11-20 13:53:01.767786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.936 [2024-11-20 13:53:01.767799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:49.936 [2024-11-20 13:53:01.767811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:33:49.936 [2024-11-20 13:53:01.767821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.936 [2024-11-20 13:53:01.775053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:49.936 [2024-11-20 13:53:01.775234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:49.936 [2024-11-20 13:53:01.775256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.162 ms 00:33:49.936 [2024-11-20 13:53:01.775273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.936 [2024-11-20 13:53:01.775365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.936 [2024-11-20 13:53:01.775378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:49.936 [2024-11-20 13:53:01.775390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:33:49.936 [2024-11-20 13:53:01.775400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.936 [2024-11-20 13:53:01.775448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.936 [2024-11-20 13:53:01.775460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:49.936 [2024-11-20 13:53:01.775471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:49.936 [2024-11-20 13:53:01.775481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.936 [2024-11-20 13:53:01.775515] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:49.936 [2024-11-20 13:53:01.780387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.936 [2024-11-20 13:53:01.780420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:49.936 [2024-11-20 13:53:01.780433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.892 ms 00:33:49.936 [2024-11-20 13:53:01.780448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.936 [2024-11-20 13:53:01.780480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.936 [2024-11-20 13:53:01.780492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:49.936 [2024-11-20 13:53:01.780503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:49.936 [2024-11-20 13:53:01.780512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.936 [2024-11-20 13:53:01.780567] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:49.936 [2024-11-20 13:53:01.780593] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:49.936 [2024-11-20 13:53:01.780646] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:49.936 [2024-11-20 13:53:01.780668] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:49.936 [2024-11-20 13:53:01.780758] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:49.936 [2024-11-20 13:53:01.780772] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:49.936 [2024-11-20 13:53:01.780785] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:49.936 [2024-11-20 13:53:01.780799] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:49.936 [2024-11-20 13:53:01.780813] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:49.936 [2024-11-20 13:53:01.780825] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:49.936 [2024-11-20 13:53:01.780835] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:49.936 [2024-11-20 13:53:01.780845] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:49.936 [2024-11-20 13:53:01.780859] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:49.936 [2024-11-20 13:53:01.780871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.936 [2024-11-20 13:53:01.780881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:49.936 [2024-11-20 13:53:01.780892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:33:49.936 [2024-11-20 13:53:01.780902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.936 [2024-11-20 13:53:01.780975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.936 [2024-11-20 13:53:01.780986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:49.936 [2024-11-20 13:53:01.780997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:33:49.936 [2024-11-20 13:53:01.781007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.936 [2024-11-20 13:53:01.781109] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:49.936 [2024-11-20 13:53:01.781124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:49.936 [2024-11-20 13:53:01.781136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:49.936 [2024-11-20 13:53:01.781146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:49.936 [2024-11-20 13:53:01.781157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:49.936 [2024-11-20 13:53:01.781167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:49.936 [2024-11-20 13:53:01.781177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:49.936 [2024-11-20 13:53:01.781187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:49.936 [2024-11-20 13:53:01.781198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:49.936 [2024-11-20 13:53:01.781207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:49.936 [2024-11-20 13:53:01.781218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:49.936 [2024-11-20 13:53:01.781230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:49.936 [2024-11-20 13:53:01.781240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:49.936 [2024-11-20 13:53:01.781249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:49.936 [2024-11-20 13:53:01.781259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:49.936 [2024-11-20 13:53:01.781278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:49.937 [2024-11-20 13:53:01.781288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:49.937 [2024-11-20 13:53:01.781298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:49.937 [2024-11-20 13:53:01.781308] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:49.937 [2024-11-20 13:53:01.781318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:49.937 [2024-11-20 13:53:01.781327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:49.937 [2024-11-20 13:53:01.781337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:49.937 [2024-11-20 13:53:01.781346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:49.937 [2024-11-20 13:53:01.781355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:49.937 [2024-11-20 13:53:01.781365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:49.937 [2024-11-20 13:53:01.781374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:49.937 [2024-11-20 13:53:01.781384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:49.937 [2024-11-20 13:53:01.781393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:49.937 [2024-11-20 13:53:01.781403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:49.937 [2024-11-20 13:53:01.781413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:49.937 [2024-11-20 13:53:01.781421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:49.937 [2024-11-20 13:53:01.781431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:49.937 [2024-11-20 13:53:01.781440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:49.937 [2024-11-20 13:53:01.781449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:49.937 [2024-11-20 13:53:01.781458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:49.937 [2024-11-20 13:53:01.781467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:49.937 [2024-11-20 13:53:01.781476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:49.937 [2024-11-20 13:53:01.781485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:49.937 [2024-11-20 13:53:01.781494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:49.937 [2024-11-20 13:53:01.781504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:49.937 [2024-11-20 13:53:01.781513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:49.937 [2024-11-20 13:53:01.781522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:49.937 [2024-11-20 13:53:01.781533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:49.937 [2024-11-20 13:53:01.781542] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:49.937 [2024-11-20 13:53:01.781553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:49.937 [2024-11-20 13:53:01.781564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:49.937 [2024-11-20 13:53:01.781574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:49.937 [2024-11-20 13:53:01.781584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:49.937 [2024-11-20 13:53:01.781594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:49.937 [2024-11-20 13:53:01.781615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:49.937 
[2024-11-20 13:53:01.781625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:49.937 [2024-11-20 13:53:01.781634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:49.937 [2024-11-20 13:53:01.781644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:49.937 [2024-11-20 13:53:01.781655] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:49.937 [2024-11-20 13:53:01.781668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:49.937 [2024-11-20 13:53:01.781680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:49.937 [2024-11-20 13:53:01.781691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:49.937 [2024-11-20 13:53:01.781701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:49.937 [2024-11-20 13:53:01.781711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:49.937 [2024-11-20 13:53:01.781722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:49.937 [2024-11-20 13:53:01.781732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:49.937 [2024-11-20 13:53:01.781743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:49.937 [2024-11-20 13:53:01.781753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:49.937 [2024-11-20 13:53:01.781763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:49.937 [2024-11-20 13:53:01.781774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:49.937 [2024-11-20 13:53:01.781784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:49.937 [2024-11-20 13:53:01.781795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:49.937 [2024-11-20 13:53:01.781805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:49.937 [2024-11-20 13:53:01.781816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:49.937 [2024-11-20 13:53:01.781827] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:49.937 [2024-11-20 13:53:01.781842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:49.937 [2024-11-20 13:53:01.781853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:49.937 [2024-11-20 13:53:01.781864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:49.937 [2024-11-20 13:53:01.781874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:49.937 [2024-11-20 13:53:01.781884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:49.937 [2024-11-20 13:53:01.781896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.937 [2024-11-20 13:53:01.781907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:49.937 [2024-11-20 13:53:01.781917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:33:49.937 [2024-11-20 13:53:01.781928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.937 [2024-11-20 13:53:01.821052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.937 [2024-11-20 13:53:01.821103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:49.937 [2024-11-20 13:53:01.821121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.136 ms 00:33:49.937 [2024-11-20 13:53:01.821132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.937 [2024-11-20 13:53:01.821238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.937 [2024-11-20 13:53:01.821251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:49.937 [2024-11-20 13:53:01.821262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:33:49.937 [2024-11-20 13:53:01.821272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.937 [2024-11-20 13:53:01.886661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.937 [2024-11-20 13:53:01.886716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:49.937 [2024-11-20 13:53:01.886735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.412 ms 00:33:49.937 [2024-11-20 13:53:01.886746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.937 [2024-11-20 13:53:01.886811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.937 [2024-11-20 13:53:01.886823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:49.937 [2024-11-20 13:53:01.886840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:49.937 [2024-11-20 13:53:01.886850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.937 [2024-11-20 13:53:01.887379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.937 [2024-11-20 13:53:01.887394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:49.937 [2024-11-20 13:53:01.887405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:33:49.937 [2024-11-20 13:53:01.887416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.937 [2024-11-20 13:53:01.887539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.937 [2024-11-20 13:53:01.887553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:49.937 [2024-11-20 13:53:01.887565] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:33:49.937 [2024-11-20 13:53:01.887581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.198 [2024-11-20 13:53:01.907217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.198 [2024-11-20 13:53:01.907264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:50.198 [2024-11-20 13:53:01.907284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.644 ms 00:33:50.198 [2024-11-20 13:53:01.907295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.198 [2024-11-20 13:53:01.926623] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:50.198 [2024-11-20 13:53:01.926668] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:50.198 [2024-11-20 13:53:01.926687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.198 [2024-11-20 13:53:01.926698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:50.198 [2024-11-20 13:53:01.926712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.277 ms 00:33:50.198 [2024-11-20 13:53:01.926723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.198 [2024-11-20 13:53:01.956832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.198 [2024-11-20 13:53:01.956888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:50.198 [2024-11-20 13:53:01.956906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.105 ms 00:33:50.198 [2024-11-20 13:53:01.956916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.198 [2024-11-20 13:53:01.975579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.198 [2024-11-20 13:53:01.975642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:50.198 [2024-11-20 13:53:01.975660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.636 ms 00:33:50.198 [2024-11-20 13:53:01.975670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.198 [2024-11-20 13:53:01.994246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.198 [2024-11-20 13:53:01.994428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:50.198 [2024-11-20 13:53:01.994453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.561 ms 00:33:50.198 [2024-11-20 13:53:01.994465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.198 [2024-11-20 13:53:01.995426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.198 [2024-11-20 13:53:01.995456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:50.198 [2024-11-20 13:53:01.995470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:33:50.198 [2024-11-20 13:53:01.995487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.198 [2024-11-20 13:53:02.082745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.198 [2024-11-20 13:53:02.082832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:50.198 [2024-11-20 13:53:02.082858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.372 ms 00:33:50.198 [2024-11-20 13:53:02.082869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.198 [2024-11-20 13:53:02.095025] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:50.198 [2024-11-20 13:53:02.098403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.198 [2024-11-20 13:53:02.098443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:50.198 [2024-11-20 13:53:02.098461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.478 ms 00:33:50.198 [2024-11-20 13:53:02.098473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.198 [2024-11-20 13:53:02.098605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.198 [2024-11-20 13:53:02.098639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:50.198 [2024-11-20 13:53:02.098651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:50.198 [2024-11-20 13:53:02.098683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.198 [2024-11-20 13:53:02.100275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.198 [2024-11-20 13:53:02.100318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:50.198 [2024-11-20 13:53:02.100344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.526 ms 00:33:50.198 [2024-11-20 13:53:02.100355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.198 [2024-11-20 13:53:02.100397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.198 [2024-11-20 13:53:02.100409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:50.198 [2024-11-20 13:53:02.100420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:50.198 [2024-11-20 13:53:02.100446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.198 [2024-11-20 13:53:02.100491] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:50.198 [2024-11-20 13:53:02.100506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.198 [2024-11-20 13:53:02.100529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:50.198 [2024-11-20 13:53:02.100539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:33:50.198 [2024-11-20 13:53:02.100550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.198 [2024-11-20 13:53:02.138223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.198 [2024-11-20 13:53:02.138459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:50.198 [2024-11-20 13:53:02.138556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.710 ms 00:33:50.198 [2024-11-20 13:53:02.138638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.198 [2024-11-20 13:53:02.138830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.198 [2024-11-20 13:53:02.138883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:50.198 [2024-11-20 13:53:02.138961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:33:50.198 [2024-11-20 13:53:02.139056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
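The layout reported during this startup is internally consistent: the l2p region is sized as the number of L2P entries times the per-entry address size printed next to it (20971520 entries, 4 bytes each). A quick shell check, taking MiB = 2^20 bytes as the dump does:

    $ awk 'BEGIN { printf "l2p region: %.2f MiB\n", 20971520 * 4 / 1048576 }'  # entries * address size
    l2p region: 80.00 MiB

which matches the Region l2p / blocks: 80.00 MiB lines in the layout dump above.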
00:33:50.198 [2024-11-20 13:53:02.142533] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.181 ms, result 0 00:33:51.576  [2024-11-20T13:53:04.469Z] Copying: 1096/1048576 [kB] (1096 kBps) [... incremental progress updates elided ...] [2024-11-20T13:53:33.853Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-11-20 13:53:33.711778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.896 [2024-11-20 13:53:33.712123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:21.896 [2024-11-20 13:53:33.712408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:21.896 [2024-11-20 13:53:33.712474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.896 [2024-11-20 13:53:33.713156] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:21.896 [2024-11-20 13:53:33.719291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.896 [2024-11-20 13:53:33.719461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:21.896 [2024-11-20 13:53:33.719673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.930 ms 00:34:21.896 [2024-11-20 13:53:33.719731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.896 [2024-11-20 13:53:33.720117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.896 [2024-11-20 13:53:33.720252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:21.896 [2024-11-20 13:53:33.720397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:34:21.896 [2024-11-20 13:53:33.720456]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.896 [2024-11-20 13:53:33.732601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.896 [2024-11-20 13:53:33.732785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:21.896 [2024-11-20 13:53:33.732892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.044 ms 00:34:21.896 [2024-11-20 13:53:33.732939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.896 [2024-11-20 13:53:33.738474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.896 [2024-11-20 13:53:33.738620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:21.896 [2024-11-20 13:53:33.738720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.473 ms 00:34:21.896 [2024-11-20 13:53:33.738757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.896 [2024-11-20 13:53:33.776595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.896 [2024-11-20 13:53:33.776772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:21.896 [2024-11-20 13:53:33.776852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.827 ms 00:34:21.896 [2024-11-20 13:53:33.776890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.896 [2024-11-20 13:53:33.798246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.896 [2024-11-20 13:53:33.798422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:21.896 [2024-11-20 13:53:33.798546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.294 ms 00:34:21.896 [2024-11-20 13:53:33.798586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.896 [2024-11-20 13:53:33.800487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.896 [2024-11-20 13:53:33.800626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:21.896 [2024-11-20 13:53:33.800699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.816 ms 00:34:21.896 [2024-11-20 13:53:33.800735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.896 [2024-11-20 13:53:33.837193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.896 [2024-11-20 13:53:33.837349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:21.896 [2024-11-20 13:53:33.837447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.430 ms 00:34:21.896 [2024-11-20 13:53:33.837488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.170 [2024-11-20 13:53:33.874480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.170 [2024-11-20 13:53:33.874677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:22.170 [2024-11-20 13:53:33.874823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.989 ms 00:34:22.170 [2024-11-20 13:53:33.874900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.170 [2024-11-20 13:53:33.910812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.170 [2024-11-20 13:53:33.910972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:22.170 [2024-11-20 13:53:33.911048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 35.901 ms 00:34:22.170 [2024-11-20 13:53:33.911084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.170 [2024-11-20 13:53:33.948158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.170 [2024-11-20 13:53:33.948385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:22.170 [2024-11-20 13:53:33.948411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.029 ms 00:34:22.170 [2024-11-20 13:53:33.948423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.170 [2024-11-20 13:53:33.948496] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:22.170 [2024-11-20 13:53:33.948516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:22.170 [2024-11-20 13:53:33.948530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:34:22.170 [2024-11-20 13:53:33.948543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: 
free 00:34:22.170 [2024-11-20 13:53:33.948765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:22.170 [2024-11-20 13:53:33.948974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.948986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.948998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 
261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949641] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:22.171 [2024-11-20 13:53:33.949720] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:22.171 [2024-11-20 13:53:33.949731] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da2d074f-2de8-4281-ab42-5ee8adeba823 00:34:22.171 [2024-11-20 13:53:33.949743] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:34:22.171 [2024-11-20 13:53:33.949754] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 157632 00:34:22.171 [2024-11-20 13:53:33.949765] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 155648 00:34:22.171 [2024-11-20 13:53:33.949781] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0127 00:34:22.171 [2024-11-20 13:53:33.949792] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:22.171 [2024-11-20 13:53:33.949803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:22.171 [2024-11-20 13:53:33.949814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:22.171 [2024-11-20 13:53:33.949837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:22.171 [2024-11-20 13:53:33.949847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:22.171 [2024-11-20 13:53:33.949858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.171 [2024-11-20 13:53:33.949869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:22.171 [2024-11-20 13:53:33.949881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.365 ms 00:34:22.171 [2024-11-20 13:53:33.949892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.171 [2024-11-20 13:53:33.970511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.171 [2024-11-20 13:53:33.970708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:22.171 [2024-11-20 13:53:33.970851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.606 ms 00:34:22.171 [2024-11-20 13:53:33.970889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.171 [2024-11-20 13:53:33.971435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.171 [2024-11-20 13:53:33.971523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:22.171 [2024-11-20 13:53:33.971589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:34:22.171 [2024-11-20 13:53:33.971641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.171 [2024-11-20 13:53:34.022645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
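The statistics dump above pairs "total writes: 157632" with "user writes: 155648" and prints "WAF: 1.0127"; the WAF figure is simply the ratio of those two counters. A minimal sketch reproducing the number, with the values copied from the dump (the variable names are illustrative, not SPDK's internals):

```c
#include <stdio.h>

int main(void)
{
    /* Counters copied from the ftl_dev_dump_stats output above. */
    double total_writes = 157632.0; /* all writes issued to the media */
    double user_writes  = 155648.0; /* writes issued by the host      */

    /* Write amplification factor: media writes per host write. */
    double waf = total_writes / user_writes;

    printf("WAF: %.4f\n", waf); /* prints "WAF: 1.0127" */
    return 0;
}
```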
00:34:22.171 [2024-11-20 13:53:34.022835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:22.171 [2024-11-20 13:53:34.022911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.171 [2024-11-20 13:53:34.022948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.171 [2024-11-20 13:53:34.023044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.171 [2024-11-20 13:53:34.023078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:22.171 [2024-11-20 13:53:34.023110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.171 [2024-11-20 13:53:34.023141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.171 [2024-11-20 13:53:34.023321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.171 [2024-11-20 13:53:34.023366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:22.171 [2024-11-20 13:53:34.023398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.171 [2024-11-20 13:53:34.023483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.171 [2024-11-20 13:53:34.023531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.171 [2024-11-20 13:53:34.023564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:22.171 [2024-11-20 13:53:34.023595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.171 [2024-11-20 13:53:34.023672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.429 [2024-11-20 13:53:34.148855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.429 [2024-11-20 13:53:34.149057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:22.429 [2024-11-20 13:53:34.149206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.429 [2024-11-20 13:53:34.149245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.429 [2024-11-20 13:53:34.252176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.429 [2024-11-20 13:53:34.252356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:22.429 [2024-11-20 13:53:34.252492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.429 [2024-11-20 13:53:34.252530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.429 [2024-11-20 13:53:34.252718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.429 [2024-11-20 13:53:34.252772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:22.429 [2024-11-20 13:53:34.252858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.429 [2024-11-20 13:53:34.252894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.429 [2024-11-20 13:53:34.252985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.429 [2024-11-20 13:53:34.253062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:22.429 [2024-11-20 13:53:34.253078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.429 [2024-11-20 13:53:34.253089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.429 [2024-11-20 
13:53:34.253213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.429 [2024-11-20 13:53:34.253226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:22.429 [2024-11-20 13:53:34.253242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.429 [2024-11-20 13:53:34.253253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.429 [2024-11-20 13:53:34.253291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.429 [2024-11-20 13:53:34.253304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:22.429 [2024-11-20 13:53:34.253314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.429 [2024-11-20 13:53:34.253325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.429 [2024-11-20 13:53:34.253366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.429 [2024-11-20 13:53:34.253378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:22.429 [2024-11-20 13:53:34.253388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.429 [2024-11-20 13:53:34.253403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.429 [2024-11-20 13:53:34.253448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:22.429 [2024-11-20 13:53:34.253461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:22.429 [2024-11-20 13:53:34.253471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:22.429 [2024-11-20 13:53:34.253481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.429 [2024-11-20 13:53:34.253645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 542.808 ms, result 0 00:34:23.365 00:34:23.365 00:34:23.625 13:53:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:25.529 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:25.529 13:53:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:25.529 [2024-11-20 13:53:37.162828] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
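The spdk_dd invocation above copies --count=262144 blocks while skipping the first --skip=262144 blocks of ftl0, i.e. it reads back the second extent of the data written earlier. Assuming the 4 KiB logical block size implied elsewhere in this log (the command itself does not print it), that works out to 1 GiB starting at a 1 GiB offset, matching the "1024 [MB]" total in the "Copying:" progress lines later in the log:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t block_size = 4096;  /* assumed FTL logical block size */
    const uint64_t count = 262144;     /* --count: blocks to copy        */
    const uint64_t skip  = 262144;     /* --skip: input blocks to skip   */

    uint64_t bytes  = count * block_size;
    uint64_t offset = skip * block_size;

    /* 262144 blocks * 4096 B = 1 GiB, i.e. the 1024 [MB] total that the
     * "Copying: x/1024" progress lines report. */
    printf("copy %llu MiB starting at offset %llu MiB\n",
           (unsigned long long)(bytes >> 20),
           (unsigned long long)(offset >> 20));
    return 0;
}
```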
00:34:25.529 [2024-11-20 13:53:37.162967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82952 ] 00:34:25.529 [2024-11-20 13:53:37.343253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:25.529 [2024-11-20 13:53:37.461909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.099 [2024-11-20 13:53:37.839287] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:26.099 [2024-11-20 13:53:37.839519] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:26.099 [2024-11-20 13:53:38.001930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.099 [2024-11-20 13:53:38.002147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:26.099 [2024-11-20 13:53:38.002179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:26.099 [2024-11-20 13:53:38.002191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.099 [2024-11-20 13:53:38.002304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.099 [2024-11-20 13:53:38.002328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:26.099 [2024-11-20 13:53:38.002354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:34:26.099 [2024-11-20 13:53:38.002367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.099 [2024-11-20 13:53:38.002396] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:26.099 [2024-11-20 13:53:38.003575] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:26.099 [2024-11-20 13:53:38.003622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.099 [2024-11-20 13:53:38.003634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:26.099 [2024-11-20 13:53:38.003645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.234 ms 00:34:26.099 [2024-11-20 13:53:38.003656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.099 [2024-11-20 13:53:38.005115] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:26.099 [2024-11-20 13:53:38.023771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.099 [2024-11-20 13:53:38.023922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:26.099 [2024-11-20 13:53:38.023946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.686 ms 00:34:26.099 [2024-11-20 13:53:38.023957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.099 [2024-11-20 13:53:38.024026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.099 [2024-11-20 13:53:38.024038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:26.099 [2024-11-20 13:53:38.024050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:34:26.099 [2024-11-20 13:53:38.024060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.099 [2024-11-20 13:53:38.030936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
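Each management step in this startup sequence is traced as the same four NOTICE entries: "Action", "name: ...", "duration: ... ms", "status: ...". A hypothetical helper showing that pattern and how the millisecond duration falls out of two timestamps; this is an illustration of the log format, not SPDK's actual mngt/ftl_mngt.c code:

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for the trace_step quadruplets seen in this log. */
static void trace_step(const char *dev, const char *name, int status,
                       struct timespec start, struct timespec end)
{
    /* Duration in milliseconds from two monotonic timestamps. */
    double ms = (double)(end.tv_sec - start.tv_sec) * 1e3 +
                (double)(end.tv_nsec - start.tv_nsec) / 1e6;

    printf("[FTL][%s] Action\n", dev);
    printf("[FTL][%s]  name: %s\n", dev, name);
    printf("[FTL][%s]  duration: %.3f ms\n", dev, ms);
    printf("[FTL][%s]  status: %d\n", dev, status);
}

int main(void)
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... the step's work would run here ... */
    clock_gettime(CLOCK_MONOTONIC, &end);
    trace_step("ftl0", "Check configuration", 0, start, end);
    return 0;
}
```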
00:34:26.099 [2024-11-20 13:53:38.031081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:26.099 [2024-11-20 13:53:38.031103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.810 ms 00:34:26.099 [2024-11-20 13:53:38.031120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.099 [2024-11-20 13:53:38.031207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.099 [2024-11-20 13:53:38.031220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:26.099 [2024-11-20 13:53:38.031231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:34:26.099 [2024-11-20 13:53:38.031241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.099 [2024-11-20 13:53:38.031286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.099 [2024-11-20 13:53:38.031298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:26.099 [2024-11-20 13:53:38.031309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:26.099 [2024-11-20 13:53:38.031319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.099 [2024-11-20 13:53:38.031347] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:26.099 [2024-11-20 13:53:38.036097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.099 [2024-11-20 13:53:38.036132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:26.099 [2024-11-20 13:53:38.036145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.766 ms 00:34:26.099 [2024-11-20 13:53:38.036159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.099 [2024-11-20 13:53:38.036190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.099 [2024-11-20 13:53:38.036201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:26.099 [2024-11-20 13:53:38.036212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:26.099 [2024-11-20 13:53:38.036222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.099 [2024-11-20 13:53:38.036276] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:26.099 [2024-11-20 13:53:38.036302] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:26.099 [2024-11-20 13:53:38.036337] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:26.099 [2024-11-20 13:53:38.036359] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:26.099 [2024-11-20 13:53:38.036454] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:26.099 [2024-11-20 13:53:38.036468] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:26.099 [2024-11-20 13:53:38.036481] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:26.099 [2024-11-20 13:53:38.036495] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:26.099 [2024-11-20 13:53:38.036507] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:26.099 [2024-11-20 13:53:38.036519] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:26.099 [2024-11-20 13:53:38.036530] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:26.099 [2024-11-20 13:53:38.036540] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:26.099 [2024-11-20 13:53:38.036554] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:26.099 [2024-11-20 13:53:38.036565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.099 [2024-11-20 13:53:38.036575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:26.099 [2024-11-20 13:53:38.036586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:34:26.099 [2024-11-20 13:53:38.036615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.099 [2024-11-20 13:53:38.036689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.099 [2024-11-20 13:53:38.036700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:26.099 [2024-11-20 13:53:38.036711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:34:26.099 [2024-11-20 13:53:38.036721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.099 [2024-11-20 13:53:38.036820] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:26.099 [2024-11-20 13:53:38.036835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:26.099 [2024-11-20 13:53:38.036846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:26.099 [2024-11-20 13:53:38.036857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:26.099 [2024-11-20 13:53:38.036867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:26.100 [2024-11-20 13:53:38.036877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:26.100 [2024-11-20 13:53:38.036886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:26.100 [2024-11-20 13:53:38.036896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:26.100 [2024-11-20 13:53:38.036906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:26.100 [2024-11-20 13:53:38.036915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:26.100 [2024-11-20 13:53:38.036925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:26.100 [2024-11-20 13:53:38.036935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:26.100 [2024-11-20 13:53:38.036944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:26.100 [2024-11-20 13:53:38.036954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:26.100 [2024-11-20 13:53:38.036963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:26.100 [2024-11-20 13:53:38.036982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:26.100 [2024-11-20 13:53:38.036992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:26.100 [2024-11-20 13:53:38.037001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:26.100 [2024-11-20 13:53:38.037011] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:26.100 [2024-11-20 13:53:38.037020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:26.100 [2024-11-20 13:53:38.037030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:26.100 [2024-11-20 13:53:38.037040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:26.100 [2024-11-20 13:53:38.037050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:26.100 [2024-11-20 13:53:38.037059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:26.100 [2024-11-20 13:53:38.037068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:26.100 [2024-11-20 13:53:38.037077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:26.100 [2024-11-20 13:53:38.037086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:26.100 [2024-11-20 13:53:38.037096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:26.100 [2024-11-20 13:53:38.037105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:26.100 [2024-11-20 13:53:38.037115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:26.100 [2024-11-20 13:53:38.037124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:26.100 [2024-11-20 13:53:38.037134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:26.100 [2024-11-20 13:53:38.037144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:26.100 [2024-11-20 13:53:38.037153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:26.100 [2024-11-20 13:53:38.037161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:26.100 [2024-11-20 13:53:38.037170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:26.100 [2024-11-20 13:53:38.037179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:26.100 [2024-11-20 13:53:38.037189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:26.100 [2024-11-20 13:53:38.037197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:26.100 [2024-11-20 13:53:38.037206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:26.100 [2024-11-20 13:53:38.037215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:26.100 [2024-11-20 13:53:38.037225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:26.100 [2024-11-20 13:53:38.037233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:26.100 [2024-11-20 13:53:38.037243] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:26.100 [2024-11-20 13:53:38.037253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:26.100 [2024-11-20 13:53:38.037264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:26.100 [2024-11-20 13:53:38.037274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:26.100 [2024-11-20 13:53:38.037285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:26.100 [2024-11-20 13:53:38.037295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:26.100 [2024-11-20 13:53:38.037304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:26.100 
[2024-11-20 13:53:38.037314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:26.100 [2024-11-20 13:53:38.037324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:26.100 [2024-11-20 13:53:38.037333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:26.100 [2024-11-20 13:53:38.037345] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:26.100 [2024-11-20 13:53:38.037358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:26.100 [2024-11-20 13:53:38.037370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:26.100 [2024-11-20 13:53:38.037381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:26.100 [2024-11-20 13:53:38.037393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:26.100 [2024-11-20 13:53:38.037403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:26.100 [2024-11-20 13:53:38.037414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:26.100 [2024-11-20 13:53:38.037425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:26.100 [2024-11-20 13:53:38.037435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:26.100 [2024-11-20 13:53:38.037446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:26.100 [2024-11-20 13:53:38.037456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:26.100 [2024-11-20 13:53:38.037467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:26.100 [2024-11-20 13:53:38.037478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:26.100 [2024-11-20 13:53:38.037488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:26.100 [2024-11-20 13:53:38.037498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:26.100 [2024-11-20 13:53:38.037509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:26.100 [2024-11-20 13:53:38.037519] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:26.100 [2024-11-20 13:53:38.037534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:26.100 [2024-11-20 13:53:38.037545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:26.100 [2024-11-20 13:53:38.037555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:26.100 [2024-11-20 13:53:38.037566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:26.100 [2024-11-20 13:53:38.037576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:26.100 [2024-11-20 13:53:38.037589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.100 [2024-11-20 13:53:38.037611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:26.100 [2024-11-20 13:53:38.037623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms 00:34:26.100 [2024-11-20 13:53:38.037633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.360 [2024-11-20 13:53:38.076625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.360 [2024-11-20 13:53:38.076672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:26.360 [2024-11-20 13:53:38.076688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.005 ms 00:34:26.360 [2024-11-20 13:53:38.076699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.360 [2024-11-20 13:53:38.076792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.360 [2024-11-20 13:53:38.076805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:26.360 [2024-11-20 13:53:38.076815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:34:26.360 [2024-11-20 13:53:38.076826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.360 [2024-11-20 13:53:38.135701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.360 [2024-11-20 13:53:38.135748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:26.360 [2024-11-20 13:53:38.135764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.904 ms 00:34:26.360 [2024-11-20 13:53:38.135775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.360 [2024-11-20 13:53:38.135822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.360 [2024-11-20 13:53:38.135834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:26.360 [2024-11-20 13:53:38.135849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:26.360 [2024-11-20 13:53:38.135859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.360 [2024-11-20 13:53:38.136354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.360 [2024-11-20 13:53:38.136368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:26.360 [2024-11-20 13:53:38.136380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:34:26.360 [2024-11-20 13:53:38.136391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.360 [2024-11-20 13:53:38.136511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.360 [2024-11-20 13:53:38.136526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:26.360 [2024-11-20 13:53:38.136537] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:34:26.360 [2024-11-20 13:53:38.136553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.360 [2024-11-20 13:53:38.156639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.360 [2024-11-20 13:53:38.156681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:26.360 [2024-11-20 13:53:38.156701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.096 ms 00:34:26.360 [2024-11-20 13:53:38.156713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.360 [2024-11-20 13:53:38.175821] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:26.360 [2024-11-20 13:53:38.175860] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:26.360 [2024-11-20 13:53:38.175877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.360 [2024-11-20 13:53:38.175888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:26.360 [2024-11-20 13:53:38.175900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.079 ms 00:34:26.360 [2024-11-20 13:53:38.175911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.360 [2024-11-20 13:53:38.205568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.360 [2024-11-20 13:53:38.205620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:26.360 [2024-11-20 13:53:38.205646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.660 ms 00:34:26.360 [2024-11-20 13:53:38.205657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.360 [2024-11-20 13:53:38.223950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.360 [2024-11-20 13:53:38.223989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:26.360 [2024-11-20 13:53:38.224003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.275 ms 00:34:26.360 [2024-11-20 13:53:38.224013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.360 [2024-11-20 13:53:38.242647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.360 [2024-11-20 13:53:38.242686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:26.360 [2024-11-20 13:53:38.242700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.609 ms 00:34:26.360 [2024-11-20 13:53:38.242710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.360 [2024-11-20 13:53:38.243513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.360 [2024-11-20 13:53:38.243539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:26.360 [2024-11-20 13:53:38.243552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:34:26.360 [2024-11-20 13:53:38.243565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.620 [2024-11-20 13:53:38.330215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.620 [2024-11-20 13:53:38.330305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:26.620 [2024-11-20 13:53:38.330339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.767 ms 00:34:26.620 [2024-11-20 13:53:38.330351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.620 [2024-11-20 13:53:38.342094] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:26.620 [2024-11-20 13:53:38.345320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.620 [2024-11-20 13:53:38.345354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:26.620 [2024-11-20 13:53:38.345370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.931 ms 00:34:26.620 [2024-11-20 13:53:38.345382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.620 [2024-11-20 13:53:38.345488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.620 [2024-11-20 13:53:38.345503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:26.620 [2024-11-20 13:53:38.345515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:26.620 [2024-11-20 13:53:38.345529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.620 [2024-11-20 13:53:38.346447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.620 [2024-11-20 13:53:38.346481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:26.620 [2024-11-20 13:53:38.346494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.858 ms 00:34:26.620 [2024-11-20 13:53:38.346504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.620 [2024-11-20 13:53:38.346542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.620 [2024-11-20 13:53:38.346553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:26.620 [2024-11-20 13:53:38.346564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:26.620 [2024-11-20 13:53:38.346574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.620 [2024-11-20 13:53:38.346629] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:26.620 [2024-11-20 13:53:38.346643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.620 [2024-11-20 13:53:38.346653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:26.620 [2024-11-20 13:53:38.346664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:34:26.620 [2024-11-20 13:53:38.346674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.620 [2024-11-20 13:53:38.384287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.620 [2024-11-20 13:53:38.384331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:26.620 [2024-11-20 13:53:38.384347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.652 ms 00:34:26.620 [2024-11-20 13:53:38.384366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.620 [2024-11-20 13:53:38.384450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.620 [2024-11-20 13:53:38.384463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:26.620 [2024-11-20 13:53:38.384475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:34:26.620 [2024-11-20 13:53:38.384485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
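As a rough cross-check of the "average 27 MBps" figure in the copy progress that follows: 1024 MiB moves between roughly 13:53:38 and 13:54:16 by the bracketed timestamps, about 38 seconds. The start and end values below are approximations read off the log, so the result is only indicative:

```c
#include <stdio.h>

int main(void)
{
    double mib = 1024.0; /* total copied, from "1024/1024 [MB]" */

    /* Approximate wall-clock bounds of the copy, in seconds of day,
     * read from the surrounding log timestamps. */
    double start = 13 * 3600 + 53 * 60 + 38.4;
    double end   = 13 * 3600 + 54 * 60 + 16.3;

    printf("average: %.0f MBps\n", mib / (end - start)); /* ~27 */
    return 0;
}
```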
00:34:26.621 [2024-11-20 13:53:38.385846] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 384.043 ms, result 0 00:34:28.021  [2024-11-20T13:53:40.915Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-20T13:53:41.852Z] Copying: 57/1024 [MB] (28 MBps) [2024-11-20T13:53:42.789Z] Copying: 85/1024 [MB] (27 MBps) [2024-11-20T13:53:43.726Z] Copying: 113/1024 [MB] (27 MBps) [2024-11-20T13:53:44.662Z] Copying: 141/1024 [MB] (28 MBps) [2024-11-20T13:53:45.686Z] Copying: 168/1024 [MB] (27 MBps) [2024-11-20T13:53:46.621Z] Copying: 197/1024 [MB] (28 MBps) [2024-11-20T13:53:48.000Z] Copying: 225/1024 [MB] (28 MBps) [2024-11-20T13:53:48.937Z] Copying: 253/1024 [MB] (27 MBps) [2024-11-20T13:53:49.874Z] Copying: 280/1024 [MB] (27 MBps) [2024-11-20T13:53:50.811Z] Copying: 308/1024 [MB] (27 MBps) [2024-11-20T13:53:51.749Z] Copying: 335/1024 [MB] (27 MBps) [2024-11-20T13:53:52.702Z] Copying: 362/1024 [MB] (27 MBps) [2024-11-20T13:53:53.647Z] Copying: 388/1024 [MB] (25 MBps) [2024-11-20T13:53:54.612Z] Copying: 414/1024 [MB] (25 MBps) [2024-11-20T13:53:55.986Z] Copying: 440/1024 [MB] (26 MBps) [2024-11-20T13:53:56.926Z] Copying: 466/1024 [MB] (26 MBps) [2024-11-20T13:53:57.862Z] Copying: 492/1024 [MB] (26 MBps) [2024-11-20T13:53:58.799Z] Copying: 519/1024 [MB] (26 MBps) [2024-11-20T13:53:59.735Z] Copying: 545/1024 [MB] (25 MBps) [2024-11-20T13:54:00.710Z] Copying: 571/1024 [MB] (26 MBps) [2024-11-20T13:54:01.646Z] Copying: 597/1024 [MB] (26 MBps) [2024-11-20T13:54:02.583Z] Copying: 623/1024 [MB] (26 MBps) [2024-11-20T13:54:03.960Z] Copying: 649/1024 [MB] (25 MBps) [2024-11-20T13:54:04.897Z] Copying: 675/1024 [MB] (26 MBps) [2024-11-20T13:54:05.833Z] Copying: 701/1024 [MB] (25 MBps) [2024-11-20T13:54:06.772Z] Copying: 727/1024 [MB] (26 MBps) [2024-11-20T13:54:07.796Z] Copying: 755/1024 [MB] (27 MBps) [2024-11-20T13:54:08.733Z] Copying: 784/1024 [MB] (29 MBps) [2024-11-20T13:54:09.669Z] Copying: 814/1024 [MB] (29 MBps) [2024-11-20T13:54:10.607Z] Copying: 843/1024 [MB] (29 MBps) [2024-11-20T13:54:11.987Z] Copying: 872/1024 [MB] (28 MBps) [2024-11-20T13:54:12.555Z] Copying: 899/1024 [MB] (27 MBps) [2024-11-20T13:54:13.934Z] Copying: 926/1024 [MB] (26 MBps) [2024-11-20T13:54:14.871Z] Copying: 953/1024 [MB] (26 MBps) [2024-11-20T13:54:15.813Z] Copying: 979/1024 [MB] (26 MBps) [2024-11-20T13:54:16.382Z] Copying: 1006/1024 [MB] (26 MBps) [2024-11-20T13:54:16.382Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-20 13:54:16.317455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.425 [2024-11-20 13:54:16.317548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:04.425 [2024-11-20 13:54:16.317577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:04.425 [2024-11-20 13:54:16.317615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.425 [2024-11-20 13:54:16.317658] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:04.425 [2024-11-20 13:54:16.324906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.425 [2024-11-20 13:54:16.324970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:04.425 [2024-11-20 13:54:16.324995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.228 ms 00:35:04.425 [2024-11-20 13:54:16.325009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.425 [2024-11-20 
13:54:16.325298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.425 [2024-11-20 13:54:16.325320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:04.425 [2024-11-20 13:54:16.325334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:35:04.425 [2024-11-20 13:54:16.325346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.425 [2024-11-20 13:54:16.328939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.425 [2024-11-20 13:54:16.328971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:04.425 [2024-11-20 13:54:16.328986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.579 ms 00:35:04.425 [2024-11-20 13:54:16.328998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.425 [2024-11-20 13:54:16.335039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.425 [2024-11-20 13:54:16.335071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:04.425 [2024-11-20 13:54:16.335084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.020 ms 00:35:04.425 [2024-11-20 13:54:16.335095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.425 [2024-11-20 13:54:16.372727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.425 [2024-11-20 13:54:16.372776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:04.425 [2024-11-20 13:54:16.372792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.616 ms 00:35:04.425 [2024-11-20 13:54:16.372803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.686 [2024-11-20 13:54:16.394144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.686 [2024-11-20 13:54:16.394190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:04.686 [2024-11-20 13:54:16.394214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.344 ms 00:35:04.686 [2024-11-20 13:54:16.394226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.686 [2024-11-20 13:54:16.396235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.686 [2024-11-20 13:54:16.396280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:04.686 [2024-11-20 13:54:16.396294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.965 ms 00:35:04.686 [2024-11-20 13:54:16.396304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.686 [2024-11-20 13:54:16.433974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.686 [2024-11-20 13:54:16.434040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:04.686 [2024-11-20 13:54:16.434057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.713 ms 00:35:04.686 [2024-11-20 13:54:16.434068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.686 [2024-11-20 13:54:16.471091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.686 [2024-11-20 13:54:16.471154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:04.686 [2024-11-20 13:54:16.471170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.053 ms 00:35:04.686 [2024-11-20 13:54:16.471180] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.686 [2024-11-20 13:54:16.507469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.686 [2024-11-20 13:54:16.507513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:04.686 [2024-11-20 13:54:16.507528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.320 ms 00:35:04.686 [2024-11-20 13:54:16.507539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.686 [2024-11-20 13:54:16.544268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.686 [2024-11-20 13:54:16.544313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:04.686 [2024-11-20 13:54:16.544329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.713 ms 00:35:04.686 [2024-11-20 13:54:16.544339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.686 [2024-11-20 13:54:16.544368] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:04.686 [2024-11-20 13:54:16.544386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:35:04.686 [2024-11-20 13:54:16.544406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:35:04.686 [2024-11-20 13:54:16.544419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544583] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 
13:54:16.544867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:04.686 [2024-11-20 13:54:16.544898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.544909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.544919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.544931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.544942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.544952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.544963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.544973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.544983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.544994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:35:04.687 [2024-11-20 13:54:16.545133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:04.687 [2024-11-20 13:54:16.545491] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:04.687 [2024-11-20 13:54:16.545506] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da2d074f-2de8-4281-ab42-5ee8adeba823 00:35:04.687 [2024-11-20 13:54:16.545518] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:35:04.687 [2024-11-20 13:54:16.545527] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:35:04.687 [2024-11-20 13:54:16.545537] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:04.687 [2024-11-20 13:54:16.545548] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:04.687 [2024-11-20 13:54:16.545557] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:04.687 [2024-11-20 13:54:16.545568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:04.687 [2024-11-20 13:54:16.545590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:04.687 [2024-11-20 13:54:16.545608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:04.687 [2024-11-20 13:54:16.545618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:04.687 [2024-11-20 13:54:16.545627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.687 [2024-11-20 13:54:16.545637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:04.687 [2024-11-20 13:54:16.545649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:35:04.687 [2024-11-20 13:54:16.545659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.687 [2024-11-20 13:54:16.565667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.687 [2024-11-20 13:54:16.565709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:04.687 [2024-11-20 13:54:16.565724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.995 ms 00:35:04.687 [2024-11-20 13:54:16.565735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.687 [2024-11-20 13:54:16.566299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.687 [2024-11-20 13:54:16.566321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L 
checkpointing 00:35:04.687 [2024-11-20 13:54:16.566339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:35:04.687 [2024-11-20 13:54:16.566350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.687 [2024-11-20 13:54:16.617443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.687 [2024-11-20 13:54:16.617490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:04.687 [2024-11-20 13:54:16.617507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.687 [2024-11-20 13:54:16.617518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.687 [2024-11-20 13:54:16.617606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.687 [2024-11-20 13:54:16.617625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:04.687 [2024-11-20 13:54:16.617643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.687 [2024-11-20 13:54:16.617654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.687 [2024-11-20 13:54:16.617736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.687 [2024-11-20 13:54:16.617750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:04.687 [2024-11-20 13:54:16.617761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.687 [2024-11-20 13:54:16.617772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.687 [2024-11-20 13:54:16.617791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.687 [2024-11-20 13:54:16.617802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:04.687 [2024-11-20 13:54:16.617813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.687 [2024-11-20 13:54:16.617828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.947 [2024-11-20 13:54:16.743311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.947 [2024-11-20 13:54:16.743405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:04.948 [2024-11-20 13:54:16.743422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.948 [2024-11-20 13:54:16.743434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.948 [2024-11-20 13:54:16.845343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.948 [2024-11-20 13:54:16.845412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:04.948 [2024-11-20 13:54:16.845435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.948 [2024-11-20 13:54:16.845446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.948 [2024-11-20 13:54:16.845545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.948 [2024-11-20 13:54:16.845557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:04.948 [2024-11-20 13:54:16.845568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.948 [2024-11-20 13:54:16.845578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.948 [2024-11-20 13:54:16.845639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.948 [2024-11-20 
13:54:16.845652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:04.948 [2024-11-20 13:54:16.845663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.948 [2024-11-20 13:54:16.845674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.948 [2024-11-20 13:54:16.845785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.948 [2024-11-20 13:54:16.845798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:04.948 [2024-11-20 13:54:16.845809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.948 [2024-11-20 13:54:16.845820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.948 [2024-11-20 13:54:16.845856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.948 [2024-11-20 13:54:16.845869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:04.948 [2024-11-20 13:54:16.845880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.948 [2024-11-20 13:54:16.845890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.948 [2024-11-20 13:54:16.845939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.948 [2024-11-20 13:54:16.845951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:04.948 [2024-11-20 13:54:16.845961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.948 [2024-11-20 13:54:16.845971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.948 [2024-11-20 13:54:16.846015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:04.948 [2024-11-20 13:54:16.846027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:04.948 [2024-11-20 13:54:16.846037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:04.948 [2024-11-20 13:54:16.846048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.948 [2024-11-20 13:54:16.846213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.575 ms, result 0 00:35:06.328 00:35:06.328 00:35:06.328 13:54:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:35:07.742 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:35:07.742 13:54:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:35:07.742 13:54:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:35:07.742 13:54:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:07.742 13:54:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:08.002 13:54:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:35:08.002 13:54:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:08.261 13:54:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:35:08.261 13:54:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81256 00:35:08.261 13:54:19 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@954 -- # '[' -z 81256 ']' 00:35:08.261 13:54:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81256 00:35:08.261 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81256) - No such process 00:35:08.261 Process with pid 81256 is not found 00:35:08.262 13:54:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81256 is not found' 00:35:08.262 13:54:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:35:08.521 13:54:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:35:08.521 Remove shared memory files 00:35:08.521 13:54:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:08.521 13:54:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:35:08.521 13:54:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:35:08.521 13:54:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:35:08.521 13:54:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:08.521 13:54:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:35:08.521 00:35:08.521 real 3m26.409s 00:35:08.521 user 3m52.682s 00:35:08.521 sys 0m37.700s 00:35:08.521 13:54:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:08.521 13:54:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:08.521 ************************************ 00:35:08.521 END TEST ftl_dirty_shutdown 00:35:08.521 ************************************ 00:35:08.521 13:54:20 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:35:08.521 13:54:20 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:08.521 13:54:20 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:08.521 13:54:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:08.521 ************************************ 00:35:08.521 START TEST ftl_upgrade_shutdown 00:35:08.521 ************************************ 00:35:08.521 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:35:08.781 * Looking for test storage... 
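The ftl_upgrade_shutdown prologue traced below compares the installed lcov version against a threshold through the lt()/cmp_versions helpers from scripts/common.sh. A distilled sketch of that comparison logic, reconstructed only from the xtrace that follows; the real helper additionally runs every component through decimal() for validation and supports the ==, <= and >= operators:

  # Sketch of lt()/cmp_versions as seen in the xtrace below (scripts/common.sh);
  # decimal() validation and the ==/<=/>= paths of the real helper are omitted.
  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15), split on . - :
    IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
      a=${ver1[v]:-0} b=${ver2[v]:-0}  # missing components compare as 0
      if ((a > b)); then [[ $op == '>' ]]; return; fi
      if ((a < b)); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' ]]   # all components equal: only == succeeds
  }

So the lt 1.15 2 call traced below returns 0 at the first component (1 < 2), which is what selects the lcov branch-coverage options.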
00:35:08.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:08.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.781 --rc genhtml_branch_coverage=1 00:35:08.781 --rc genhtml_function_coverage=1 00:35:08.781 --rc genhtml_legend=1 00:35:08.781 --rc geninfo_all_blocks=1 00:35:08.781 --rc geninfo_unexecuted_blocks=1 00:35:08.781 00:35:08.781 ' 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:08.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.781 --rc genhtml_branch_coverage=1 00:35:08.781 --rc genhtml_function_coverage=1 00:35:08.781 --rc genhtml_legend=1 00:35:08.781 --rc geninfo_all_blocks=1 00:35:08.781 --rc geninfo_unexecuted_blocks=1 00:35:08.781 00:35:08.781 ' 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:08.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.781 --rc genhtml_branch_coverage=1 00:35:08.781 --rc genhtml_function_coverage=1 00:35:08.781 --rc genhtml_legend=1 00:35:08.781 --rc geninfo_all_blocks=1 00:35:08.781 --rc geninfo_unexecuted_blocks=1 00:35:08.781 00:35:08.781 ' 00:35:08.781 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:08.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.781 --rc genhtml_branch_coverage=1 00:35:08.781 --rc genhtml_function_coverage=1 00:35:08.782 --rc genhtml_legend=1 00:35:08.782 --rc geninfo_all_blocks=1 00:35:08.782 --rc geninfo_unexecuted_blocks=1 00:35:08.782 00:35:08.782 ' 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:35:08.782 13:54:20 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83453 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83453 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83453 ']' 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:08.782 13:54:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:09.042 [2024-11-20 13:54:20.745296] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
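The waitforlisten 83453 call above parks the test until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock. A minimal sketch of that wait loop, assuming rpc_get_methods over the -s socket as the liveness probe; the real helper in test/common/autotest_common.sh carries a retry budget and extra diagnostics:

  # Sketch of a waitforlisten-style loop, assuming rpc_get_methods as the probe;
  # not the verbatim autotest_common.sh implementation.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    while ! "$rpc_py" -s "$rpc_addr" rpc_get_methods &> /dev/null; do
      kill -0 "$pid" || return 1   # give up if the target died before listening
      sleep 0.1
    done
  }

Once the loop returns, the RPC socket is serviceable, which is what lets the bdev_nvme_attach_controller and lvol RPC calls further below proceed.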
00:35:09.042 [2024-11-20 13:54:20.745419] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83453 ] 00:35:09.042 [2024-11-20 13:54:20.927762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.301 [2024-11-20 13:54:21.041176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:35:10.240 13:54:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:35:10.499 13:54:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:35:10.499 13:54:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:35:10.499 13:54:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:35:10.499 13:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:35:10.499 13:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:10.499 13:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:35:10.499 13:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:35:10.499 13:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:35:10.499 13:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:10.499 { 00:35:10.499 "name": "basen1", 00:35:10.499 "aliases": [ 00:35:10.499 "ff5fca35-3bfc-4bdf-acae-6e45538832f5" 00:35:10.499 ], 00:35:10.499 "product_name": "NVMe disk", 00:35:10.499 "block_size": 4096, 00:35:10.499 "num_blocks": 1310720, 00:35:10.499 "uuid": "ff5fca35-3bfc-4bdf-acae-6e45538832f5", 00:35:10.499 "numa_id": -1, 00:35:10.499 "assigned_rate_limits": { 00:35:10.499 "rw_ios_per_sec": 0, 00:35:10.499 "rw_mbytes_per_sec": 0, 00:35:10.499 "r_mbytes_per_sec": 0, 00:35:10.499 "w_mbytes_per_sec": 0 00:35:10.499 }, 00:35:10.499 "claimed": true, 00:35:10.499 "claim_type": "read_many_write_one", 00:35:10.499 "zoned": false, 00:35:10.499 "supported_io_types": { 00:35:10.499 "read": true, 00:35:10.499 "write": true, 00:35:10.499 "unmap": true, 00:35:10.499 "flush": true, 00:35:10.499 "reset": true, 00:35:10.499 "nvme_admin": true, 00:35:10.499 "nvme_io": true, 00:35:10.499 "nvme_io_md": false, 00:35:10.499 "write_zeroes": true, 00:35:10.499 "zcopy": false, 00:35:10.499 "get_zone_info": false, 00:35:10.499 "zone_management": false, 00:35:10.499 "zone_append": false, 00:35:10.499 "compare": true, 00:35:10.499 "compare_and_write": false, 00:35:10.499 "abort": true, 00:35:10.499 "seek_hole": false, 00:35:10.499 "seek_data": false, 00:35:10.499 "copy": true, 00:35:10.499 "nvme_iov_md": false 00:35:10.499 }, 00:35:10.499 "driver_specific": { 00:35:10.499 "nvme": [ 00:35:10.499 { 00:35:10.499 "pci_address": "0000:00:11.0", 00:35:10.499 "trid": { 00:35:10.499 "trtype": "PCIe", 00:35:10.499 "traddr": "0000:00:11.0" 00:35:10.499 }, 00:35:10.499 "ctrlr_data": { 00:35:10.499 "cntlid": 0, 00:35:10.499 "vendor_id": "0x1b36", 00:35:10.499 "model_number": "QEMU NVMe Ctrl", 00:35:10.499 "serial_number": "12341", 00:35:10.499 "firmware_revision": "8.0.0", 00:35:10.499 "subnqn": "nqn.2019-08.org.qemu:12341", 00:35:10.499 "oacs": { 00:35:10.499 "security": 0, 00:35:10.499 "format": 1, 00:35:10.499 "firmware": 0, 00:35:10.499 "ns_manage": 1 00:35:10.499 }, 00:35:10.499 "multi_ctrlr": false, 00:35:10.499 "ana_reporting": false 00:35:10.499 }, 00:35:10.499 "vs": { 00:35:10.499 "nvme_version": "1.4" 00:35:10.499 }, 00:35:10.499 "ns_data": { 00:35:10.499 "id": 1, 00:35:10.499 "can_share": false 00:35:10.499 } 00:35:10.499 } 00:35:10.499 ], 00:35:10.499 "mp_policy": "active_passive" 00:35:10.499 } 00:35:10.499 } 00:35:10.499 ]' 00:35:10.499 13:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:35:10.758 13:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:35:10.758 13:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:10.758 13:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:35:10.758 13:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:35:10.758 13:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:35:10.758 13:54:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:35:10.758 13:54:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:35:10.758 13:54:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:35:10.758 13:54:22 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:10.758 13:54:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:11.017 13:54:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=835401ba-817e-4c6d-b532-9f526eee1ddd 00:35:11.017 13:54:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:35:11.017 13:54:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 835401ba-817e-4c6d-b532-9f526eee1ddd 00:35:11.276 13:54:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:35:11.276 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=6b34bd00-d88d-4ee3-804f-136c1cfa5fcb 00:35:11.276 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 6b34bd00-d88d-4ee3-804f-136c1cfa5fcb 00:35:11.535 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=5a2f5384-48e6-4086-8556-744beb01126a 00:35:11.535 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 5a2f5384-48e6-4086-8556-744beb01126a ]] 00:35:11.535 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 5a2f5384-48e6-4086-8556-744beb01126a 5120 00:35:11.535 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:35:11.535 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:35:11.535 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=5a2f5384-48e6-4086-8556-744beb01126a 00:35:11.535 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:35:11.535 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5a2f5384-48e6-4086-8556-744beb01126a 00:35:11.535 13:54:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5a2f5384-48e6-4086-8556-744beb01126a 00:35:11.535 13:54:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:11.535 13:54:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:35:11.535 13:54:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:35:11.535 13:54:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5a2f5384-48e6-4086-8556-744beb01126a 00:35:11.793 13:54:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:11.793 { 00:35:11.793 "name": "5a2f5384-48e6-4086-8556-744beb01126a", 00:35:11.793 "aliases": [ 00:35:11.793 "lvs/basen1p0" 00:35:11.793 ], 00:35:11.793 "product_name": "Logical Volume", 00:35:11.793 "block_size": 4096, 00:35:11.793 "num_blocks": 5242880, 00:35:11.793 "uuid": "5a2f5384-48e6-4086-8556-744beb01126a", 00:35:11.793 "assigned_rate_limits": { 00:35:11.793 "rw_ios_per_sec": 0, 00:35:11.793 "rw_mbytes_per_sec": 0, 00:35:11.793 "r_mbytes_per_sec": 0, 00:35:11.793 "w_mbytes_per_sec": 0 00:35:11.793 }, 00:35:11.793 "claimed": false, 00:35:11.793 "zoned": false, 00:35:11.793 "supported_io_types": { 00:35:11.793 "read": true, 00:35:11.793 "write": true, 00:35:11.793 "unmap": true, 00:35:11.793 "flush": false, 00:35:11.793 "reset": true, 00:35:11.793 "nvme_admin": false, 00:35:11.793 "nvme_io": false, 00:35:11.793 "nvme_io_md": false, 00:35:11.793 "write_zeroes": 
true, 00:35:11.793 "zcopy": false, 00:35:11.793 "get_zone_info": false, 00:35:11.793 "zone_management": false, 00:35:11.793 "zone_append": false, 00:35:11.793 "compare": false, 00:35:11.793 "compare_and_write": false, 00:35:11.793 "abort": false, 00:35:11.793 "seek_hole": true, 00:35:11.793 "seek_data": true, 00:35:11.793 "copy": false, 00:35:11.793 "nvme_iov_md": false 00:35:11.793 }, 00:35:11.793 "driver_specific": { 00:35:11.793 "lvol": { 00:35:11.793 "lvol_store_uuid": "6b34bd00-d88d-4ee3-804f-136c1cfa5fcb", 00:35:11.793 "base_bdev": "basen1", 00:35:11.793 "thin_provision": true, 00:35:11.793 "num_allocated_clusters": 0, 00:35:11.793 "snapshot": false, 00:35:11.793 "clone": false, 00:35:11.793 "esnap_clone": false 00:35:11.793 } 00:35:11.793 } 00:35:11.793 } 00:35:11.793 ]' 00:35:11.793 13:54:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:35:11.793 13:54:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:35:11.794 13:54:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:11.794 13:54:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:35:11.794 13:54:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:35:11.794 13:54:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:35:11.794 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:35:11.794 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:35:11.794 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:35:12.053 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:35:12.053 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:35:12.053 13:54:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:35:12.312 13:54:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:35:12.312 13:54:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:35:12.312 13:54:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 5a2f5384-48e6-4086-8556-744beb01126a -c cachen1p0 --l2p_dram_limit 2 00:35:12.572 [2024-11-20 13:54:24.384269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.572 [2024-11-20 13:54:24.384334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:35:12.572 [2024-11-20 13:54:24.384355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:35:12.572 [2024-11-20 13:54:24.384366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.572 [2024-11-20 13:54:24.384451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.572 [2024-11-20 13:54:24.384464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:12.572 [2024-11-20 13:54:24.384477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:35:12.572 [2024-11-20 13:54:24.384487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.572 [2024-11-20 13:54:24.384511] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:35:12.572 [2024-11-20 
13:54:24.385833] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:35:12.572 [2024-11-20 13:54:24.385885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.572 [2024-11-20 13:54:24.385896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:12.572 [2024-11-20 13:54:24.385918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.368 ms 00:35:12.572 [2024-11-20 13:54:24.385936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.572 [2024-11-20 13:54:24.386025] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 70390623-5db4-4a53-b61f-90066da92760 00:35:12.572 [2024-11-20 13:54:24.388001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.572 [2024-11-20 13:54:24.388085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:35:12.572 [2024-11-20 13:54:24.388102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:35:12.572 [2024-11-20 13:54:24.388115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.572 [2024-11-20 13:54:24.395677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.572 [2024-11-20 13:54:24.395721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:12.572 [2024-11-20 13:54:24.395734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.460 ms 00:35:12.572 [2024-11-20 13:54:24.395747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.572 [2024-11-20 13:54:24.395796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.572 [2024-11-20 13:54:24.395812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:12.572 [2024-11-20 13:54:24.395824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:35:12.572 [2024-11-20 13:54:24.395839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.572 [2024-11-20 13:54:24.395882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.572 [2024-11-20 13:54:24.395896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:35:12.572 [2024-11-20 13:54:24.395907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:35:12.572 [2024-11-20 13:54:24.395926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.572 [2024-11-20 13:54:24.395952] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:35:12.572 [2024-11-20 13:54:24.401542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.572 [2024-11-20 13:54:24.401581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:12.572 [2024-11-20 13:54:24.401624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.604 ms 00:35:12.572 [2024-11-20 13:54:24.401636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.572 [2024-11-20 13:54:24.401668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.572 [2024-11-20 13:54:24.401681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:35:12.572 [2024-11-20 13:54:24.401695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:35:12.572 [2024-11-20 13:54:24.401705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:35:12.572 [2024-11-20 13:54:24.401759] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:35:12.572 [2024-11-20 13:54:24.401890] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:35:12.572 [2024-11-20 13:54:24.401911] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:35:12.572 [2024-11-20 13:54:24.401926] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:35:12.572 [2024-11-20 13:54:24.401943] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:35:12.572 [2024-11-20 13:54:24.401955] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:35:12.572 [2024-11-20 13:54:24.401968] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:35:12.572 [2024-11-20 13:54:24.401979] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:35:12.572 [2024-11-20 13:54:24.401995] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:35:12.572 [2024-11-20 13:54:24.402005] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:35:12.572 [2024-11-20 13:54:24.402018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.572 [2024-11-20 13:54:24.402029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:35:12.572 [2024-11-20 13:54:24.402043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.261 ms 00:35:12.572 [2024-11-20 13:54:24.402053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.572 [2024-11-20 13:54:24.402127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.572 [2024-11-20 13:54:24.402138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:35:12.572 [2024-11-20 13:54:24.402153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:35:12.572 [2024-11-20 13:54:24.402173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.572 [2024-11-20 13:54:24.402288] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:35:12.572 [2024-11-20 13:54:24.402306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:35:12.572 [2024-11-20 13:54:24.402320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:12.572 [2024-11-20 13:54:24.402332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.572 [2024-11-20 13:54:24.402344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:35:12.572 [2024-11-20 13:54:24.402354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:35:12.572 [2024-11-20 13:54:24.402366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:35:12.572 [2024-11-20 13:54:24.402376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:35:12.572 [2024-11-20 13:54:24.402388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:35:12.572 [2024-11-20 13:54:24.402397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.572 [2024-11-20 13:54:24.402409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:35:12.572 [2024-11-20 13:54:24.402418] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:35:12.572 [2024-11-20 13:54:24.402430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.572 [2024-11-20 13:54:24.402439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:35:12.573 [2024-11-20 13:54:24.402450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:35:12.573 [2024-11-20 13:54:24.402459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.573 [2024-11-20 13:54:24.402476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:35:12.573 [2024-11-20 13:54:24.402487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:35:12.573 [2024-11-20 13:54:24.402500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.573 [2024-11-20 13:54:24.402510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:35:12.573 [2024-11-20 13:54:24.402522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:35:12.573 [2024-11-20 13:54:24.402531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:12.573 [2024-11-20 13:54:24.402543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:35:12.573 [2024-11-20 13:54:24.402552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:35:12.573 [2024-11-20 13:54:24.402564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:12.573 [2024-11-20 13:54:24.402574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:35:12.573 [2024-11-20 13:54:24.402585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:35:12.573 [2024-11-20 13:54:24.402595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:12.573 [2024-11-20 13:54:24.402619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:35:12.573 [2024-11-20 13:54:24.402629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:35:12.573 [2024-11-20 13:54:24.402640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:12.573 [2024-11-20 13:54:24.402649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:35:12.573 [2024-11-20 13:54:24.402664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:35:12.573 [2024-11-20 13:54:24.402673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.573 [2024-11-20 13:54:24.402685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:35:12.573 [2024-11-20 13:54:24.402694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:35:12.573 [2024-11-20 13:54:24.402706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.573 [2024-11-20 13:54:24.402715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:35:12.573 [2024-11-20 13:54:24.402726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:35:12.573 [2024-11-20 13:54:24.402736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.573 [2024-11-20 13:54:24.402747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:35:12.573 [2024-11-20 13:54:24.402756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:35:12.573 [2024-11-20 13:54:24.402767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.573 [2024-11-20 13:54:24.402776] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:35:12.573 [2024-11-20 13:54:24.402789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:35:12.573 [2024-11-20 13:54:24.402799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:12.573 [2024-11-20 13:54:24.402813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:12.573 [2024-11-20 13:54:24.402823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:35:12.573 [2024-11-20 13:54:24.402839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:35:12.573 [2024-11-20 13:54:24.402849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:35:12.573 [2024-11-20 13:54:24.402861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:35:12.573 [2024-11-20 13:54:24.402870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:35:12.573 [2024-11-20 13:54:24.402882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:35:12.573 [2024-11-20 13:54:24.402897] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:35:12.573 [2024-11-20 13:54:24.402913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:12.573 [2024-11-20 13:54:24.402928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:35:12.573 [2024-11-20 13:54:24.402942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:35:12.573 [2024-11-20 13:54:24.402952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:35:12.573 [2024-11-20 13:54:24.402966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:35:12.573 [2024-11-20 13:54:24.402976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:35:12.573 [2024-11-20 13:54:24.402989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:35:12.573 [2024-11-20 13:54:24.402999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:35:12.573 [2024-11-20 13:54:24.403012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:35:12.573 [2024-11-20 13:54:24.403023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:35:12.573 [2024-11-20 13:54:24.403038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:35:12.573 [2024-11-20 13:54:24.403048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:35:12.573 [2024-11-20 13:54:24.403061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:35:12.573 [2024-11-20 13:54:24.403071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:35:12.573 [2024-11-20 13:54:24.403085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:35:12.573 [2024-11-20 13:54:24.403095] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:35:12.573 [2024-11-20 13:54:24.403109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:12.573 [2024-11-20 13:54:24.403121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:12.573 [2024-11-20 13:54:24.403133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:35:12.573 [2024-11-20 13:54:24.403143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:35:12.573 [2024-11-20 13:54:24.403156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:35:12.573 [2024-11-20 13:54:24.403167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:12.573 [2024-11-20 13:54:24.403179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:35:12.573 [2024-11-20 13:54:24.403190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.946 ms 00:35:12.573 [2024-11-20 13:54:24.403202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:12.573 [2024-11-20 13:54:24.403245] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
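The two layout dumps above describe the same regions in different units: ftl_layout.c prints offsets and sizes in MiB, while the superblock dump prints hex block offsets and sizes. A quick cross-check, as a sketch assuming the 4 KiB FTL block size these numbers imply (bc(1) used for the fractional division):

  echo "scale=2; $((0xe80)) * 4096 / 1048576" | bc      # l2p region, type:0x2: 3712 blocks -> 14.50, matching "Region l2p ... blocks: 14.50 MiB"
  echo "scale=2; $((0x480000)) * 4096 / 1048576" | bc   # data region, type:0x9 -> 18432.00, matching "Region data_btm ... blocks: 18432.00 MiB"

The sizing is also consistent with the reported mapping table: 3774873 L2P entries at an address size of 4 bytes is about 14.4 MiB, which fits the 14.50 MiB l2p region.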
00:35:12.573 [2024-11-20 13:54:24.403264] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:35:15.861 [2024-11-20 13:54:27.727745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.861 [2024-11-20 13:54:27.727822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:35:15.861 [2024-11-20 13:54:27.727841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3329.896 ms 00:35:15.861 [2024-11-20 13:54:27.727855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.861 [2024-11-20 13:54:27.766580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.861 [2024-11-20 13:54:27.766649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:15.861 [2024-11-20 13:54:27.766666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.517 ms 00:35:15.861 [2024-11-20 13:54:27.766680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.861 [2024-11-20 13:54:27.766779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.861 [2024-11-20 13:54:27.766795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:35:15.861 [2024-11-20 13:54:27.766807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:35:15.861 [2024-11-20 13:54:27.766827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.861 [2024-11-20 13:54:27.815421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.861 [2024-11-20 13:54:27.815474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:15.861 [2024-11-20 13:54:27.815489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.629 ms 00:35:15.861 [2024-11-20 13:54:27.815503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.861 [2024-11-20 13:54:27.815544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.861 [2024-11-20 13:54:27.815563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:15.861 [2024-11-20 13:54:27.815574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:15.861 [2024-11-20 13:54:27.815587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.861 [2024-11-20 13:54:27.816118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.861 [2024-11-20 13:54:27.816142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:15.861 [2024-11-20 13:54:27.816154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.439 ms 00:35:15.861 [2024-11-20 13:54:27.816166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:15.861 [2024-11-20 13:54:27.816217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:15.861 [2024-11-20 13:54:27.816232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:15.861 [2024-11-20 13:54:27.816245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:35:15.861 [2024-11-20 13:54:27.816261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.120 [2024-11-20 13:54:27.838090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.120 [2024-11-20 13:54:27.838138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:16.120 [2024-11-20 13:54:27.838153] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.844 ms 00:35:16.120 [2024-11-20 13:54:27.838183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.120 [2024-11-20 13:54:27.861731] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:35:16.120 [2024-11-20 13:54:27.862907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.120 [2024-11-20 13:54:27.862941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:35:16.120 [2024-11-20 13:54:27.862961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.659 ms 00:35:16.120 [2024-11-20 13:54:27.862974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.120 [2024-11-20 13:54:27.894305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.120 [2024-11-20 13:54:27.894349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:35:16.120 [2024-11-20 13:54:27.894368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.343 ms 00:35:16.120 [2024-11-20 13:54:27.894379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.120 [2024-11-20 13:54:27.894458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.120 [2024-11-20 13:54:27.894474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:35:16.120 [2024-11-20 13:54:27.894491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:35:16.120 [2024-11-20 13:54:27.894502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.120 [2024-11-20 13:54:27.930609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.120 [2024-11-20 13:54:27.930780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:35:16.120 [2024-11-20 13:54:27.930810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.109 ms 00:35:16.120 [2024-11-20 13:54:27.930823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.120 [2024-11-20 13:54:27.966654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.120 [2024-11-20 13:54:27.966697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:35:16.120 [2024-11-20 13:54:27.966716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.849 ms 00:35:16.120 [2024-11-20 13:54:27.966727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.120 [2024-11-20 13:54:27.967474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.120 [2024-11-20 13:54:27.967494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:35:16.120 [2024-11-20 13:54:27.967508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.718 ms 00:35:16.120 [2024-11-20 13:54:27.967522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.120 [2024-11-20 13:54:28.069125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.120 [2024-11-20 13:54:28.069193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:35:16.120 [2024-11-20 13:54:28.069220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 101.699 ms 00:35:16.120 [2024-11-20 13:54:28.069232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.380 [2024-11-20 13:54:28.108352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
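Each management step in this trace is a four-line Action/name/duration/status group emitted by mngt/ftl_mngt.c. To see where startup time goes, the durations can be paired with their step names from a saved copy of this console output; a minimal sketch, assuming one log entry per line as the console originally printed them ("ftl.log" is a hypothetical capture, not a file from this run):

  awk '/428:trace_step/ { sub(/.*name: /, ""); n = $0 }
       /430:trace_step/ { sub(/.*duration: /, ""); printf "%10s ms  %s\n", $1, n }' ftl.log

Against the steps above this would rank "Scrub NV cache" first at 3329.896 ms, far ahead of the metadata initialization steps.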
00:35:16.380 [2024-11-20 13:54:28.108418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:35:16.380 [2024-11-20 13:54:28.108467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.080 ms 00:35:16.380 [2024-11-20 13:54:28.108479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.380 [2024-11-20 13:54:28.145439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.380 [2024-11-20 13:54:28.145499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:35:16.380 [2024-11-20 13:54:28.145519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.962 ms 00:35:16.380 [2024-11-20 13:54:28.145530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.380 [2024-11-20 13:54:28.183581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.380 [2024-11-20 13:54:28.183670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:35:16.380 [2024-11-20 13:54:28.183692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.053 ms 00:35:16.380 [2024-11-20 13:54:28.183702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.380 [2024-11-20 13:54:28.183765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.380 [2024-11-20 13:54:28.183778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:35:16.380 [2024-11-20 13:54:28.183797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:35:16.380 [2024-11-20 13:54:28.183807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.380 [2024-11-20 13:54:28.183923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:16.380 [2024-11-20 13:54:28.183936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:35:16.380 [2024-11-20 13:54:28.183953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:35:16.380 [2024-11-20 13:54:28.183963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:16.380 [2024-11-20 13:54:28.185008] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3806.441 ms, result 0 00:35:16.380 { 00:35:16.380 "name": "ftl", 00:35:16.380 "uuid": "70390623-5db4-4a53-b61f-90066da92760" 00:35:16.380 } 00:35:16.380 13:54:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:35:16.638 [2024-11-20 13:54:28.407883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.638 13:54:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:35:16.896 13:54:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:35:16.896 [2024-11-20 13:54:28.811909] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:35:16.896 13:54:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:35:17.155 [2024-11-20 13:54:29.013433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:17.155 13:54:29 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:35:17.725 Fill FTL, iteration 1 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83581 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83581 /var/tmp/spdk.tgt.sock 00:35:17.725 13:54:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83581 ']' 00:35:17.726 13:54:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:35:17.726 13:54:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.726 13:54:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:35:17.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:35:17.726 13:54:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.726 13:54:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:17.726 [2024-11-20 13:54:29.486108] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:35:17.726 [2024-11-20 13:54:29.486255] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83581 ] 00:35:17.726 [2024-11-20 13:54:29.651909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.985 [2024-11-20 13:54:29.791068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.019 13:54:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.019 13:54:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:35:19.019 13:54:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:35:19.019 ftln1 00:35:19.019 13:54:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:35:19.019 13:54:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:35:19.278 13:54:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:35:19.278 13:54:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83581 00:35:19.278 13:54:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83581 ']' 00:35:19.278 13:54:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83581 00:35:19.278 13:54:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:35:19.278 13:54:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.278 13:54:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83581 00:35:19.278 killing process with pid 83581 00:35:19.278 13:54:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:19.278 13:54:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:19.278 13:54:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83581' 00:35:19.278 13:54:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83581 00:35:19.278 13:54:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83581 00:35:21.811 13:54:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:35:21.811 13:54:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:35:21.811 [2024-11-20 13:54:33.625675] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:35:21.811 [2024-11-20 13:54:33.625805] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83635 ] 00:35:22.071 [2024-11-20 13:54:33.808051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.071 [2024-11-20 13:54:33.927888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:23.448  [2024-11-20T13:54:36.782Z] Copying: 251/1024 [MB] (251 MBps) [2024-11-20T13:54:37.736Z] Copying: 499/1024 [MB] (248 MBps) [2024-11-20T13:54:38.700Z] Copying: 735/1024 [MB] (236 MBps) [2024-11-20T13:54:38.700Z] Copying: 969/1024 [MB] (234 MBps) [2024-11-20T13:54:40.078Z] Copying: 1024/1024 [MB] (average 241 MBps) 00:35:28.121 00:35:28.121 Calculate MD5 checksum, iteration 1 00:35:28.121 13:54:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:35:28.121 13:54:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:35:28.121 13:54:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:28.121 13:54:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:28.121 13:54:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:28.121 13:54:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:28.121 13:54:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:28.121 13:54:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:28.121 [2024-11-20 13:54:39.887232] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:35:28.121 [2024-11-20 13:54:39.887372] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83699 ] 00:35:28.121 [2024-11-20 13:54:40.058307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.381 [2024-11-20 13:54:40.186345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.761  [2024-11-20T13:54:42.287Z] Copying: 681/1024 [MB] (681 MBps) [2024-11-20T13:54:43.225Z] Copying: 1024/1024 [MB] (average 660 MBps) 00:35:31.268 00:35:31.268 13:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:35:31.268 13:54:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:33.176 13:54:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:35:33.176 Fill FTL, iteration 2 00:35:33.176 13:54:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=682a7c4557960e23a7e1054c4957b380 00:35:33.176 13:54:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:35:33.176 13:54:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:33.176 13:54:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:35:33.176 13:54:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:35:33.176 13:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:33.176 13:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:33.176 13:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:33.176 13:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:33.176 13:54:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:35:33.176 [2024-11-20 13:54:44.994615] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:35:33.176 [2024-11-20 13:54:44.994919] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83755 ] 00:35:33.435 [2024-11-20 13:54:45.165256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.435 [2024-11-20 13:54:45.283484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.812  [2024-11-20T13:54:48.148Z] Copying: 249/1024 [MB] (249 MBps) [2024-11-20T13:54:49.086Z] Copying: 484/1024 [MB] (235 MBps) [2024-11-20T13:54:50.023Z] Copying: 716/1024 [MB] (232 MBps) [2024-11-20T13:54:50.283Z] Copying: 945/1024 [MB] (229 MBps) [2024-11-20T13:54:51.662Z] Copying: 1024/1024 [MB] (average 232 MBps) 00:35:39.705 00:35:39.705 13:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:35:39.705 13:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:35:39.705 Calculate MD5 checksum, iteration 2 00:35:39.705 13:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:39.705 13:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:39.705 13:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:39.705 13:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:39.705 13:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:39.705 13:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:39.705 [2024-11-20 13:54:51.392276] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:35:39.705 [2024-11-20 13:54:51.392408] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83823 ] 00:35:39.705 [2024-11-20 13:54:51.576346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.965 [2024-11-20 13:54:51.696381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:41.871  [2024-11-20T13:54:54.086Z] Copying: 695/1024 [MB] (695 MBps) [2024-11-20T13:54:55.463Z] Copying: 1024/1024 [MB] (average 677 MBps) 00:35:43.506 00:35:43.506 13:54:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:35:43.506 13:54:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:45.409 13:54:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:35:45.409 13:54:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=dbfdfc1c02acec90456ae5547df2be1e 00:35:45.409 13:54:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:35:45.409 13:54:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:45.409 13:54:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:45.409 [2024-11-20 13:54:57.048855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:45.409 [2024-11-20 13:54:57.048914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:45.409 [2024-11-20 13:54:57.048932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:35:45.409 [2024-11-20 13:54:57.048943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:45.409 [2024-11-20 13:54:57.048975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:45.410 [2024-11-20 13:54:57.048987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:45.410 [2024-11-20 13:54:57.049003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:45.410 [2024-11-20 13:54:57.049013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:45.410 [2024-11-20 13:54:57.049034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:45.410 [2024-11-20 13:54:57.049046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:45.410 [2024-11-20 13:54:57.049057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:45.410 [2024-11-20 13:54:57.049067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:45.410 [2024-11-20 13:54:57.049132] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.273 ms, result 0 00:35:45.410 true 00:35:45.410 13:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:45.410 { 00:35:45.410 "name": "ftl", 00:35:45.410 "properties": [ 00:35:45.410 { 00:35:45.410 "name": "superblock_version", 00:35:45.410 "value": 5, 00:35:45.410 "read-only": true 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "name": "base_device", 00:35:45.410 "bands": [ 00:35:45.410 { 00:35:45.410 "id": 0, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 
00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 1, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 2, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 3, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 4, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 5, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 6, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 7, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 8, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 9, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 10, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 11, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 12, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 13, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 14, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 15, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 16, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 17, 00:35:45.410 "state": "FREE", 00:35:45.410 "validity": 0.0 00:35:45.410 } 00:35:45.410 ], 00:35:45.410 "read-only": true 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "name": "cache_device", 00:35:45.410 "type": "bdev", 00:35:45.410 "chunks": [ 00:35:45.410 { 00:35:45.410 "id": 0, 00:35:45.410 "state": "INACTIVE", 00:35:45.410 "utilization": 0.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 1, 00:35:45.410 "state": "CLOSED", 00:35:45.410 "utilization": 1.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 2, 00:35:45.410 "state": "CLOSED", 00:35:45.410 "utilization": 1.0 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 3, 00:35:45.410 "state": "OPEN", 00:35:45.410 "utilization": 0.001953125 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "id": 4, 00:35:45.410 "state": "OPEN", 00:35:45.410 "utilization": 0.0 00:35:45.410 } 00:35:45.410 ], 00:35:45.410 "read-only": true 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "name": "verbose_mode", 00:35:45.410 "value": true, 00:35:45.410 "unit": "", 00:35:45.410 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:45.410 }, 00:35:45.410 { 00:35:45.410 "name": "prep_upgrade_on_shutdown", 00:35:45.410 "value": false, 00:35:45.410 "unit": "", 00:35:45.410 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:45.410 } 00:35:45.410 ] 00:35:45.410 } 00:35:45.410 13:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:35:45.670 [2024-11-20 13:54:57.524760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
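This properties blob is what the test inspects next to decide how many NV cache chunks actually hold data. Reproducing that check by hand, as a sketch (rpc.py invoked as elsewhere in this run; jq(1) assumed to be installed):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'

For the dump above (chunks 1 and 2 CLOSED at utilization 1.0, chunk 3 OPEN at 0.001953125) this prints 3, which is the used=3 the harness derives below.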
00:35:45.670 [2024-11-20 13:54:57.524812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:45.670 [2024-11-20 13:54:57.524829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:35:45.670 [2024-11-20 13:54:57.524841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:45.670 [2024-11-20 13:54:57.524867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:45.670 [2024-11-20 13:54:57.524878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:45.670 [2024-11-20 13:54:57.524889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:45.670 [2024-11-20 13:54:57.524899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:45.670 [2024-11-20 13:54:57.524918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:45.670 [2024-11-20 13:54:57.524929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:45.670 [2024-11-20 13:54:57.524939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:35:45.670 [2024-11-20 13:54:57.524949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:45.670 [2024-11-20 13:54:57.525009] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.237 ms, result 0 00:35:45.670 true 00:35:45.670 13:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:35:45.670 13:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:35:45.670 13:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:45.929 13:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:35:45.930 13:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:35:45.930 13:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:46.189 [2024-11-20 13:54:57.960448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.189 [2024-11-20 13:54:57.960505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:46.189 [2024-11-20 13:54:57.960523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:35:46.189 [2024-11-20 13:54:57.960533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.189 [2024-11-20 13:54:57.960559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.189 [2024-11-20 13:54:57.960570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:46.189 [2024-11-20 13:54:57.960581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:46.189 [2024-11-20 13:54:57.960590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.189 [2024-11-20 13:54:57.960629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.189 [2024-11-20 13:54:57.960640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:46.189 [2024-11-20 13:54:57.960651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:35:46.189 [2024-11-20 13:54:57.960661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:35:46.189 [2024-11-20 13:54:57.960724] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.265 ms, result 0 00:35:46.189 true 00:35:46.189 13:54:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:46.449 { 00:35:46.449 "name": "ftl", 00:35:46.449 "properties": [ 00:35:46.449 { 00:35:46.449 "name": "superblock_version", 00:35:46.449 "value": 5, 00:35:46.449 "read-only": true 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "name": "base_device", 00:35:46.449 "bands": [ 00:35:46.449 { 00:35:46.449 "id": 0, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 1, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 2, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 3, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 4, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 5, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 6, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 7, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 8, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 9, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 10, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 11, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 12, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 13, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 14, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 15, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 16, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 17, 00:35:46.449 "state": "FREE", 00:35:46.449 "validity": 0.0 00:35:46.449 } 00:35:46.449 ], 00:35:46.449 "read-only": true 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "name": "cache_device", 00:35:46.449 "type": "bdev", 00:35:46.449 "chunks": [ 00:35:46.449 { 00:35:46.449 "id": 0, 00:35:46.449 "state": "INACTIVE", 00:35:46.449 "utilization": 0.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 1, 00:35:46.449 "state": "CLOSED", 00:35:46.449 "utilization": 1.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 2, 00:35:46.449 "state": "CLOSED", 00:35:46.449 "utilization": 1.0 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 3, 00:35:46.449 "state": "OPEN", 00:35:46.449 "utilization": 0.001953125 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "id": 4, 00:35:46.449 "state": "OPEN", 00:35:46.449 "utilization": 0.0 00:35:46.449 } 00:35:46.449 ], 00:35:46.449 "read-only": true 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "name": "verbose_mode", 
00:35:46.449 "value": true, 00:35:46.449 "unit": "", 00:35:46.449 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:46.449 }, 00:35:46.449 { 00:35:46.449 "name": "prep_upgrade_on_shutdown", 00:35:46.449 "value": true, 00:35:46.449 "unit": "", 00:35:46.449 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:46.449 } 00:35:46.449 ] 00:35:46.449 } 00:35:46.449 13:54:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:35:46.449 13:54:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83453 ]] 00:35:46.449 13:54:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83453 00:35:46.449 13:54:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83453 ']' 00:35:46.449 13:54:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83453 00:35:46.449 13:54:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:35:46.449 13:54:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:46.449 13:54:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83453 00:35:46.449 13:54:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:46.449 13:54:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:46.449 13:54:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83453' 00:35:46.449 killing process with pid 83453 00:35:46.449 13:54:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83453 00:35:46.449 13:54:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83453 00:35:47.831 [2024-11-20 13:54:59.359663] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:35:47.831 [2024-11-20 13:54:59.380154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:47.831 [2024-11-20 13:54:59.380206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:35:47.831 [2024-11-20 13:54:59.380222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:35:47.831 [2024-11-20 13:54:59.380234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:47.831 [2024-11-20 13:54:59.380258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:35:47.831 [2024-11-20 13:54:59.384415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:47.831 [2024-11-20 13:54:59.384443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:35:47.831 [2024-11-20 13:54:59.384456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.147 ms 00:35:47.831 [2024-11-20 13:54:59.384466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.441 [2024-11-20 13:55:06.221183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.441 [2024-11-20 13:55:06.221242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:35:54.441 [2024-11-20 13:55:06.221259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6847.765 ms 00:35:54.441 [2024-11-20 13:55:06.221275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.441 [2024-11-20 13:55:06.222479] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:35:54.441 [2024-11-20 13:55:06.222513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:35:54.441 [2024-11-20 13:55:06.222525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.187 ms 00:35:54.441 [2024-11-20 13:55:06.222536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.441 [2024-11-20 13:55:06.223443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.441 [2024-11-20 13:55:06.223459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:35:54.441 [2024-11-20 13:55:06.223471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.891 ms 00:35:54.441 [2024-11-20 13:55:06.223483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.441 [2024-11-20 13:55:06.238515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.441 [2024-11-20 13:55:06.238552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:35:54.441 [2024-11-20 13:55:06.238566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.011 ms 00:35:54.441 [2024-11-20 13:55:06.238577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.441 [2024-11-20 13:55:06.247699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.441 [2024-11-20 13:55:06.247738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:35:54.441 [2024-11-20 13:55:06.247752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.114 ms 00:35:54.441 [2024-11-20 13:55:06.247779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.441 [2024-11-20 13:55:06.247882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.441 [2024-11-20 13:55:06.247900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:35:54.441 [2024-11-20 13:55:06.247912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:35:54.441 [2024-11-20 13:55:06.247927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.441 [2024-11-20 13:55:06.262329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.441 [2024-11-20 13:55:06.262364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:35:54.441 [2024-11-20 13:55:06.262377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.391 ms 00:35:54.441 [2024-11-20 13:55:06.262387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.441 [2024-11-20 13:55:06.277132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.441 [2024-11-20 13:55:06.277164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:35:54.441 [2024-11-20 13:55:06.277175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.749 ms 00:35:54.441 [2024-11-20 13:55:06.277185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.441 [2024-11-20 13:55:06.291606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.441 [2024-11-20 13:55:06.291748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:35:54.441 [2024-11-20 13:55:06.291769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.425 ms 00:35:54.441 [2024-11-20 13:55:06.291779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.441 [2024-11-20 13:55:06.306377] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.441 [2024-11-20 13:55:06.306409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:35:54.441 [2024-11-20 13:55:06.306422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.555 ms 00:35:54.441 [2024-11-20 13:55:06.306432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.441 [2024-11-20 13:55:06.306452] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:35:54.441 [2024-11-20 13:55:06.306469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:35:54.441 [2024-11-20 13:55:06.306481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:35:54.441 [2024-11-20 13:55:06.306505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:35:54.441 [2024-11-20 13:55:06.306516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:54.441 [2024-11-20 13:55:06.306528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:54.441 [2024-11-20 13:55:06.306539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:54.441 [2024-11-20 13:55:06.306550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:54.441 [2024-11-20 13:55:06.306560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:54.441 [2024-11-20 13:55:06.306571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:54.441 [2024-11-20 13:55:06.306582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:54.441 [2024-11-20 13:55:06.306593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:54.441 [2024-11-20 13:55:06.306617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:54.441 [2024-11-20 13:55:06.306627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:54.441 [2024-11-20 13:55:06.306638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:54.441 [2024-11-20 13:55:06.306648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:54.441 [2024-11-20 13:55:06.306658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:54.441 [2024-11-20 13:55:06.306669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:54.441 [2024-11-20 13:55:06.306680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:54.441 [2024-11-20 13:55:06.306693] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:35:54.441 [2024-11-20 13:55:06.306714] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 70390623-5db4-4a53-b61f-90066da92760 00:35:54.441 [2024-11-20 13:55:06.306725] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:35:54.441 [2024-11-20 13:55:06.306735] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:35:54.441 [2024-11-20 13:55:06.306746] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:35:54.441 [2024-11-20 13:55:06.306756] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:35:54.441 [2024-11-20 13:55:06.306766] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:35:54.441 [2024-11-20 13:55:06.306776] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:35:54.441 [2024-11-20 13:55:06.306790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:35:54.441 [2024-11-20 13:55:06.306799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:35:54.441 [2024-11-20 13:55:06.306808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:35:54.441 [2024-11-20 13:55:06.306818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.441 [2024-11-20 13:55:06.306829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:35:54.442 [2024-11-20 13:55:06.306844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.367 ms 00:35:54.442 [2024-11-20 13:55:06.306854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.442 [2024-11-20 13:55:06.327066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.442 [2024-11-20 13:55:06.327101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:35:54.442 [2024-11-20 13:55:06.327114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.220 ms 00:35:54.442 [2024-11-20 13:55:06.327125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.442 [2024-11-20 13:55:06.327714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.442 [2024-11-20 13:55:06.327728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:35:54.442 [2024-11-20 13:55:06.327739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.551 ms 00:35:54.442 [2024-11-20 13:55:06.327750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.442 [2024-11-20 13:55:06.394687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:54.442 [2024-11-20 13:55:06.394737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:54.442 [2024-11-20 13:55:06.394751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:54.442 [2024-11-20 13:55:06.394767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.442 [2024-11-20 13:55:06.394811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:54.442 [2024-11-20 13:55:06.394823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:54.442 [2024-11-20 13:55:06.394834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:54.442 [2024-11-20 13:55:06.394844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.442 [2024-11-20 13:55:06.394933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:54.442 [2024-11-20 13:55:06.394948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:54.442 [2024-11-20 13:55:06.394959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:54.442 [2024-11-20 13:55:06.394969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.442 [2024-11-20 13:55:06.394993] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:54.442 [2024-11-20 13:55:06.395004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:54.442 [2024-11-20 13:55:06.395014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:54.442 [2024-11-20 13:55:06.395025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.701 [2024-11-20 13:55:06.520017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:54.701 [2024-11-20 13:55:06.520072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:54.701 [2024-11-20 13:55:06.520088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:54.701 [2024-11-20 13:55:06.520106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.701 [2024-11-20 13:55:06.620929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:54.701 [2024-11-20 13:55:06.620991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:54.701 [2024-11-20 13:55:06.621006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:54.701 [2024-11-20 13:55:06.621018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.701 [2024-11-20 13:55:06.621137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:54.701 [2024-11-20 13:55:06.621150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:54.701 [2024-11-20 13:55:06.621161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:54.701 [2024-11-20 13:55:06.621172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.702 [2024-11-20 13:55:06.621224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:54.702 [2024-11-20 13:55:06.621244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:54.702 [2024-11-20 13:55:06.621254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:54.702 [2024-11-20 13:55:06.621265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.702 [2024-11-20 13:55:06.621377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:54.702 [2024-11-20 13:55:06.621400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:54.702 [2024-11-20 13:55:06.621411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:54.702 [2024-11-20 13:55:06.621421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.702 [2024-11-20 13:55:06.621460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:54.702 [2024-11-20 13:55:06.621473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:35:54.702 [2024-11-20 13:55:06.621488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:54.702 [2024-11-20 13:55:06.621498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.702 [2024-11-20 13:55:06.621540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:54.702 [2024-11-20 13:55:06.621550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:54.702 [2024-11-20 13:55:06.621561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:54.702 [2024-11-20 13:55:06.621571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.702 
[2024-11-20 13:55:06.621639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:54.702 [2024-11-20 13:55:06.621657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:54.702 [2024-11-20 13:55:06.621668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:54.702 [2024-11-20 13:55:06.621678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.702 [2024-11-20 13:55:06.621818] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7253.393 ms, result 0 00:35:59.980 13:55:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:35:59.980 13:55:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:35:59.980 13:55:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:35:59.980 13:55:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:35:59.980 13:55:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:59.980 13:55:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84024 00:35:59.980 13:55:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:59.980 13:55:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:35:59.980 13:55:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84024 00:35:59.980 13:55:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84024 ']' 00:35:59.980 13:55:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:59.980 13:55:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:59.980 13:55:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:59.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:59.980 13:55:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:59.980 13:55:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:59.980 [2024-11-20 13:55:11.081709] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
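(Aside for readers of the shutdown trace above: the WAF figure in the ftl_dev_dump_stats output is simply total media writes divided by user writes, both printed a few lines earlier. A quick check of the reported 1.5006 — reader arithmetic only, not part of the test:

$ echo "scale=4; 786752 / 524288" | bc
1.5006
)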
00:35:59.980 [2024-11-20 13:55:11.082029] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84024 ] 00:35:59.980 [2024-11-20 13:55:11.280333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.980 [2024-11-20 13:55:11.389421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:00.549 [2024-11-20 13:55:12.347186] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:00.549 [2024-11-20 13:55:12.347397] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:00.549 [2024-11-20 13:55:12.493979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.549 [2024-11-20 13:55:12.494187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:36:00.549 [2024-11-20 13:55:12.494349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:00.549 [2024-11-20 13:55:12.494392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.549 [2024-11-20 13:55:12.494479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.549 [2024-11-20 13:55:12.494494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:00.549 [2024-11-20 13:55:12.494507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:36:00.549 [2024-11-20 13:55:12.494517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.549 [2024-11-20 13:55:12.494544] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:36:00.549 [2024-11-20 13:55:12.495663] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:36:00.549 [2024-11-20 13:55:12.495692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.549 [2024-11-20 13:55:12.495703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:00.549 [2024-11-20 13:55:12.495714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.155 ms 00:36:00.549 [2024-11-20 13:55:12.495725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.549 [2024-11-20 13:55:12.497325] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:36:00.810 [2024-11-20 13:55:12.516610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.810 [2024-11-20 13:55:12.516646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:36:00.810 [2024-11-20 13:55:12.516665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.317 ms 00:36:00.810 [2024-11-20 13:55:12.516693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.810 [2024-11-20 13:55:12.516755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.810 [2024-11-20 13:55:12.516769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:36:00.810 [2024-11-20 13:55:12.516780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:36:00.810 [2024-11-20 13:55:12.516791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.810 [2024-11-20 13:55:12.523485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.810 [2024-11-20 
13:55:12.523516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:00.810 [2024-11-20 13:55:12.523527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.626 ms 00:36:00.810 [2024-11-20 13:55:12.523539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.810 [2024-11-20 13:55:12.523617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.810 [2024-11-20 13:55:12.523633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:00.810 [2024-11-20 13:55:12.523644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:36:00.810 [2024-11-20 13:55:12.523654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.810 [2024-11-20 13:55:12.523696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.810 [2024-11-20 13:55:12.523708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:36:00.810 [2024-11-20 13:55:12.523723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:36:00.810 [2024-11-20 13:55:12.523733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.810 [2024-11-20 13:55:12.523760] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:36:00.810 [2024-11-20 13:55:12.528482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.810 [2024-11-20 13:55:12.528512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:00.810 [2024-11-20 13:55:12.528525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.735 ms 00:36:00.810 [2024-11-20 13:55:12.528539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.810 [2024-11-20 13:55:12.528567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.810 [2024-11-20 13:55:12.528577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:36:00.810 [2024-11-20 13:55:12.528588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:00.810 [2024-11-20 13:55:12.528613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.810 [2024-11-20 13:55:12.528669] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:36:00.810 [2024-11-20 13:55:12.528693] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:36:00.810 [2024-11-20 13:55:12.528731] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:36:00.810 [2024-11-20 13:55:12.528750] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:36:00.810 [2024-11-20 13:55:12.528841] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:36:00.810 [2024-11-20 13:55:12.528855] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:36:00.810 [2024-11-20 13:55:12.528867] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:36:00.810 [2024-11-20 13:55:12.528881] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:36:00.810 [2024-11-20 13:55:12.528893] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:36:00.810 [2024-11-20 13:55:12.528908] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:36:00.810 [2024-11-20 13:55:12.528918] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:36:00.810 [2024-11-20 13:55:12.528928] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:36:00.810 [2024-11-20 13:55:12.528938] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:36:00.810 [2024-11-20 13:55:12.528949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.810 [2024-11-20 13:55:12.528959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:36:00.810 [2024-11-20 13:55:12.528970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.284 ms 00:36:00.810 [2024-11-20 13:55:12.528979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.810 [2024-11-20 13:55:12.529055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.810 [2024-11-20 13:55:12.529067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:36:00.810 [2024-11-20 13:55:12.529078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:36:00.810 [2024-11-20 13:55:12.529091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.810 [2024-11-20 13:55:12.529180] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:36:00.810 [2024-11-20 13:55:12.529192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:36:00.810 [2024-11-20 13:55:12.529204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:00.810 [2024-11-20 13:55:12.529215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:00.810 [2024-11-20 13:55:12.529225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:36:00.810 [2024-11-20 13:55:12.529235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:36:00.810 [2024-11-20 13:55:12.529245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:36:00.810 [2024-11-20 13:55:12.529254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:36:00.810 [2024-11-20 13:55:12.529265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:36:00.811 [2024-11-20 13:55:12.529275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:00.811 [2024-11-20 13:55:12.529288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:36:00.811 [2024-11-20 13:55:12.529298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:36:00.811 [2024-11-20 13:55:12.529308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:00.811 [2024-11-20 13:55:12.529318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:36:00.811 [2024-11-20 13:55:12.529328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:36:00.811 [2024-11-20 13:55:12.529338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:00.811 [2024-11-20 13:55:12.529348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:36:00.811 [2024-11-20 13:55:12.529358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:36:00.811 [2024-11-20 13:55:12.529367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:00.811 [2024-11-20 13:55:12.529376] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:36:00.811 [2024-11-20 13:55:12.529386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:36:00.811 [2024-11-20 13:55:12.529395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:00.811 [2024-11-20 13:55:12.529404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:36:00.811 [2024-11-20 13:55:12.529414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:36:00.811 [2024-11-20 13:55:12.529424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:00.811 [2024-11-20 13:55:12.529444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:36:00.811 [2024-11-20 13:55:12.529454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:36:00.811 [2024-11-20 13:55:12.529463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:00.811 [2024-11-20 13:55:12.529472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:36:00.811 [2024-11-20 13:55:12.529482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:36:00.811 [2024-11-20 13:55:12.529492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:00.811 [2024-11-20 13:55:12.529502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:36:00.811 [2024-11-20 13:55:12.529511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:36:00.811 [2024-11-20 13:55:12.529521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:00.811 [2024-11-20 13:55:12.529530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:36:00.811 [2024-11-20 13:55:12.529539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:36:00.811 [2024-11-20 13:55:12.529549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:00.811 [2024-11-20 13:55:12.529559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:36:00.811 [2024-11-20 13:55:12.529569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:36:00.811 [2024-11-20 13:55:12.529578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:00.811 [2024-11-20 13:55:12.529588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:36:00.811 [2024-11-20 13:55:12.529615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:36:00.811 [2024-11-20 13:55:12.529628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:00.811 [2024-11-20 13:55:12.529637] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:36:00.811 [2024-11-20 13:55:12.529649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:36:00.811 [2024-11-20 13:55:12.529660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:00.811 [2024-11-20 13:55:12.529670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:00.811 [2024-11-20 13:55:12.529684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:36:00.811 [2024-11-20 13:55:12.529694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:36:00.811 [2024-11-20 13:55:12.529704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:36:00.811 [2024-11-20 13:55:12.529714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:36:00.811 [2024-11-20 13:55:12.529723] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:36:00.811 [2024-11-20 13:55:12.529748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:36:00.811 [2024-11-20 13:55:12.529761] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:36:00.811 [2024-11-20 13:55:12.529774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:00.811 [2024-11-20 13:55:12.529786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:36:00.811 [2024-11-20 13:55:12.529796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:36:00.811 [2024-11-20 13:55:12.529807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:36:00.811 [2024-11-20 13:55:12.529837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:36:00.811 [2024-11-20 13:55:12.529848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:36:00.811 [2024-11-20 13:55:12.529859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:36:00.811 [2024-11-20 13:55:12.529869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:36:00.811 [2024-11-20 13:55:12.529879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:36:00.811 [2024-11-20 13:55:12.529890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:36:00.811 [2024-11-20 13:55:12.529900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:36:00.811 [2024-11-20 13:55:12.529911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:36:00.811 [2024-11-20 13:55:12.529921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:36:00.811 [2024-11-20 13:55:12.529931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:36:00.811 [2024-11-20 13:55:12.529943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:36:00.811 [2024-11-20 13:55:12.529954] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:36:00.811 [2024-11-20 13:55:12.529965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:00.811 [2024-11-20 13:55:12.529977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:00.811 [2024-11-20 13:55:12.529987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:36:00.811 [2024-11-20 13:55:12.529997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:36:00.811 [2024-11-20 13:55:12.530009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:36:00.811 [2024-11-20 13:55:12.530021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.811 [2024-11-20 13:55:12.530032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:36:00.811 [2024-11-20 13:55:12.530043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.896 ms 00:36:00.811 [2024-11-20 13:55:12.530052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.811 [2024-11-20 13:55:12.530100] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:36:00.811 [2024-11-20 13:55:12.530114] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:36:04.104 [2024-11-20 13:55:15.599959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.600031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:36:04.104 [2024-11-20 13:55:15.600050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3074.839 ms 00:36:04.104 [2024-11-20 13:55:15.600061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.638902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.638964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:04.104 [2024-11-20 13:55:15.638981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.609 ms 00:36:04.104 [2024-11-20 13:55:15.638993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.639126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.639145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:36:04.104 [2024-11-20 13:55:15.639159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:36:04.104 [2024-11-20 13:55:15.639169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.686463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.686522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:04.104 [2024-11-20 13:55:15.686538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.323 ms 00:36:04.104 [2024-11-20 13:55:15.686554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.686635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.686647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:04.104 [2024-11-20 13:55:15.686659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:04.104 [2024-11-20 13:55:15.686669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.687172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.687192] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:04.104 [2024-11-20 13:55:15.687203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.412 ms 00:36:04.104 [2024-11-20 13:55:15.687213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.687263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.687275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:04.104 [2024-11-20 13:55:15.687287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:36:04.104 [2024-11-20 13:55:15.687297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.708971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.709021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:04.104 [2024-11-20 13:55:15.709036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.684 ms 00:36:04.104 [2024-11-20 13:55:15.709048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.745595] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:36:04.104 [2024-11-20 13:55:15.745645] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:36:04.104 [2024-11-20 13:55:15.745662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.745689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:36:04.104 [2024-11-20 13:55:15.745702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.525 ms 00:36:04.104 [2024-11-20 13:55:15.745713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.765692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.765733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:36:04.104 [2024-11-20 13:55:15.765748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.955 ms 00:36:04.104 [2024-11-20 13:55:15.765760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.783781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.783816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:36:04.104 [2024-11-20 13:55:15.783831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.002 ms 00:36:04.104 [2024-11-20 13:55:15.783841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.801822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.801856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:36:04.104 [2024-11-20 13:55:15.801869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.967 ms 00:36:04.104 [2024-11-20 13:55:15.801895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.802781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.802808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:36:04.104 [2024-11-20 
13:55:15.802820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.780 ms 00:36:04.104 [2024-11-20 13:55:15.802830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.887608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.887814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:36:04.104 [2024-11-20 13:55:15.887841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 84.882 ms 00:36:04.104 [2024-11-20 13:55:15.887853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.898855] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:36:04.104 [2024-11-20 13:55:15.899920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.899950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:36:04.104 [2024-11-20 13:55:15.899964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.034 ms 00:36:04.104 [2024-11-20 13:55:15.899975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.900069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.900086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:36:04.104 [2024-11-20 13:55:15.900098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:04.104 [2024-11-20 13:55:15.900109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.900188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.900201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:36:04.104 [2024-11-20 13:55:15.900213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:36:04.104 [2024-11-20 13:55:15.900224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.900251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.104 [2024-11-20 13:55:15.900262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:36:04.104 [2024-11-20 13:55:15.900277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:04.104 [2024-11-20 13:55:15.900287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.104 [2024-11-20 13:55:15.900321] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:36:04.104 [2024-11-20 13:55:15.900334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.105 [2024-11-20 13:55:15.900344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:36:04.105 [2024-11-20 13:55:15.900355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:36:04.105 [2024-11-20 13:55:15.900365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.105 [2024-11-20 13:55:15.936520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.105 [2024-11-20 13:55:15.936567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:36:04.105 [2024-11-20 13:55:15.936582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.189 ms 00:36:04.105 [2024-11-20 13:55:15.936594] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.105 [2024-11-20 13:55:15.936691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.105 [2024-11-20 13:55:15.936704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:36:04.105 [2024-11-20 13:55:15.936715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:36:04.105 [2024-11-20 13:55:15.936725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.105 [2024-11-20 13:55:15.937888] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3449.028 ms, result 0 00:36:04.105 [2024-11-20 13:55:15.952869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:04.105 [2024-11-20 13:55:15.968857] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:36:04.105 [2024-11-20 13:55:15.977798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:04.105 13:55:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:04.105 13:55:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:36:04.105 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:04.105 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:36:04.105 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:36:04.364 [2024-11-20 13:55:16.221453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.364 [2024-11-20 13:55:16.221515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:36:04.364 [2024-11-20 13:55:16.221532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:36:04.364 [2024-11-20 13:55:16.221547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.364 [2024-11-20 13:55:16.221577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.364 [2024-11-20 13:55:16.221590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:36:04.364 [2024-11-20 13:55:16.221617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:36:04.364 [2024-11-20 13:55:16.221629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.364 [2024-11-20 13:55:16.221651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:04.364 [2024-11-20 13:55:16.221662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:36:04.364 [2024-11-20 13:55:16.221673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:36:04.365 [2024-11-20 13:55:16.221683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:04.365 [2024-11-20 13:55:16.221750] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.288 ms, result 0 00:36:04.365 true 00:36:04.365 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:36:04.624 { 00:36:04.624 "name": "ftl", 00:36:04.624 "properties": [ 00:36:04.624 { 00:36:04.624 "name": "superblock_version", 00:36:04.624 "value": 5, 00:36:04.624 "read-only": true 00:36:04.624 }, 
00:36:04.624 { 00:36:04.624 "name": "base_device", 00:36:04.624 "bands": [ 00:36:04.624 { 00:36:04.624 "id": 0, 00:36:04.624 "state": "CLOSED", 00:36:04.624 "validity": 1.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 1, 00:36:04.624 "state": "CLOSED", 00:36:04.624 "validity": 1.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 2, 00:36:04.624 "state": "CLOSED", 00:36:04.624 "validity": 0.007843137254901933 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 3, 00:36:04.624 "state": "FREE", 00:36:04.624 "validity": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 4, 00:36:04.624 "state": "FREE", 00:36:04.624 "validity": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 5, 00:36:04.624 "state": "FREE", 00:36:04.624 "validity": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 6, 00:36:04.624 "state": "FREE", 00:36:04.624 "validity": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 7, 00:36:04.624 "state": "FREE", 00:36:04.624 "validity": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 8, 00:36:04.624 "state": "FREE", 00:36:04.624 "validity": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 9, 00:36:04.624 "state": "FREE", 00:36:04.624 "validity": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 10, 00:36:04.624 "state": "FREE", 00:36:04.624 "validity": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 11, 00:36:04.624 "state": "FREE", 00:36:04.624 "validity": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 12, 00:36:04.624 "state": "FREE", 00:36:04.624 "validity": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 13, 00:36:04.624 "state": "FREE", 00:36:04.624 "validity": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 14, 00:36:04.624 "state": "FREE", 00:36:04.624 "validity": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 15, 00:36:04.624 "state": "FREE", 00:36:04.624 "validity": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 16, 00:36:04.624 "state": "FREE", 00:36:04.624 "validity": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 17, 00:36:04.624 "state": "FREE", 00:36:04.624 "validity": 0.0 00:36:04.624 } 00:36:04.624 ], 00:36:04.624 "read-only": true 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "name": "cache_device", 00:36:04.624 "type": "bdev", 00:36:04.624 "chunks": [ 00:36:04.624 { 00:36:04.624 "id": 0, 00:36:04.624 "state": "INACTIVE", 00:36:04.624 "utilization": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 1, 00:36:04.624 "state": "OPEN", 00:36:04.624 "utilization": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 2, 00:36:04.624 "state": "OPEN", 00:36:04.624 "utilization": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 3, 00:36:04.624 "state": "FREE", 00:36:04.624 "utilization": 0.0 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "id": 4, 00:36:04.624 "state": "FREE", 00:36:04.624 "utilization": 0.0 00:36:04.624 } 00:36:04.624 ], 00:36:04.624 "read-only": true 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "name": "verbose_mode", 00:36:04.624 "value": true, 00:36:04.624 "unit": "", 00:36:04.624 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:36:04.624 }, 00:36:04.624 { 00:36:04.624 "name": "prep_upgrade_on_shutdown", 00:36:04.624 "value": false, 00:36:04.624 "unit": "", 00:36:04.624 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:36:04.624 } 00:36:04.624 ] 00:36:04.624 } 00:36:04.624 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:36:04.625 13:55:16 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:36:04.625 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:36:04.884 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:36:04.884 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:36:04.884 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:36:04.884 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:36:04.884 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:36:05.144 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:36:05.144 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:36:05.144 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:36:05.144 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:36:05.144 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:36:05.144 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:05.144 Validate MD5 checksum, iteration 1 00:36:05.144 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:36:05.144 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:05.144 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:05.144 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:05.144 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:05.144 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:05.144 13:55:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:05.144 [2024-11-20 13:55:16.973442] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
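(For readers following the xtrace: each "Validate MD5 checksum" iteration reduces to the fragment below, reconstructed from the commands traced above. tcp_dd is the helper from ftl/common.sh invoked here; $testdir, $skip, and the recorded $expected sum are illustrative stand-ins, since the write phase that produced the reference sums falls outside this excerpt.

# One checksum iteration: read a 1 GiB window (1024 x 1 MiB blocks, queue depth 2)
# from the exported FTL bdev over NVMe/TCP, advance the window, hash what came back.
tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
skip=$((skip + 1024))                          # next iteration starts 1024 MiB further in
sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
[[ $sum == "$expected" ]]                      # data must survive the shutdown/startup cycle
)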
00:36:05.144 [2024-11-20 13:55:16.973580] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84095 ] 00:36:05.404 [2024-11-20 13:55:17.155026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.404 [2024-11-20 13:55:17.277551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:07.310  [2024-11-20T13:55:19.525Z] Copying: 672/1024 [MB] (672 MBps) [2024-11-20T13:55:21.446Z] Copying: 1024/1024 [MB] (average 665 MBps) 00:36:09.489 00:36:09.489 13:55:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:36:09.489 13:55:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:10.871 13:55:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:10.871 13:55:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=682a7c4557960e23a7e1054c4957b380 00:36:10.871 13:55:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 682a7c4557960e23a7e1054c4957b380 != \6\8\2\a\7\c\4\5\5\7\9\6\0\e\2\3\a\7\e\1\0\5\4\c\4\9\5\7\b\3\8\0 ]] 00:36:10.871 13:55:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:10.871 13:55:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:10.871 13:55:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:36:10.871 Validate MD5 checksum, iteration 2 00:36:10.871 13:55:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:10.871 13:55:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:10.871 13:55:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:10.871 13:55:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:10.871 13:55:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:10.871 13:55:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:11.131 [2024-11-20 13:55:22.845868] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
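(The backslash-escaped string in the [[ ... != \6\8\2\a... ]] comparison a few lines up is only an xtrace artifact: inside [[ ]], an unquoted right-hand side of != is treated as a glob pattern, so the script quotes it and xtrace renders the quoting as per-character escapes. The test is a plain literal equality check, equivalent to:

# Literal (non-pattern) comparison of the computed vs. recorded MD5 sum.
[[ "$sum" == "682a7c4557960e23a7e1054c4957b380" ]]
)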
00:36:11.131 [2024-11-20 13:55:22.846246] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84161 ] 00:36:11.131 [2024-11-20 13:55:23.042786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:11.390 [2024-11-20 13:55:23.157729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:13.299  [2024-11-20T13:55:25.516Z] Copying: 687/1024 [MB] (687 MBps) [2024-11-20T13:55:28.806Z] Copying: 1024/1024 [MB] (average 643 MBps) 00:36:16.849 00:36:16.849 13:55:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:36:16.849 13:55:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=dbfdfc1c02acec90456ae5547df2be1e 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ dbfdfc1c02acec90456ae5547df2be1e != \d\b\f\d\f\c\1\c\0\2\a\c\e\c\9\0\4\5\6\a\e\5\5\4\7\d\f\2\b\e\1\e ]] 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84024 ]] 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84024 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84260 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84260 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84260 ']' 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
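(The dirty shutdown traced above — tcp_target_shutdown_dirty in ftl/common.sh — amounts to the fragment below: SIGKILL denies FTL any chance to flush metadata or persist a clean-state marker, so the restarted target (pid 84260, whose startup messages follow) is forced down the recovery path instead of loading a clean state.

# Dirty shutdown as traced: kill -9 the target so FTL cannot mark itself clean;
# the next startup must recover from the NV cache.
if [[ -n $spdk_tgt_pid ]]; then
    kill -9 $spdk_tgt_pid
    unset spdk_tgt_pid
fi
)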
00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:18.770 13:55:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:18.770 [2024-11-20 13:55:30.538956] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:36:18.770 [2024-11-20 13:55:30.539090] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84260 ] 00:36:18.770 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84024 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:36:18.770 [2024-11-20 13:55:30.714970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:19.029 [2024-11-20 13:55:30.828178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:19.964 [2024-11-20 13:55:31.776941] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:19.964 [2024-11-20 13:55:31.777016] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:20.223 [2024-11-20 13:55:31.923806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.223 [2024-11-20 13:55:31.923872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:36:20.223 [2024-11-20 13:55:31.923890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:20.223 [2024-11-20 13:55:31.923902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.223 [2024-11-20 13:55:31.923970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.223 [2024-11-20 13:55:31.923984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:20.223 [2024-11-20 13:55:31.923996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:36:20.223 [2024-11-20 13:55:31.924007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.223 [2024-11-20 13:55:31.924032] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:36:20.223 [2024-11-20 13:55:31.925012] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:36:20.223 [2024-11-20 13:55:31.925040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.223 [2024-11-20 13:55:31.925051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:20.223 [2024-11-20 13:55:31.925063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.014 ms 00:36:20.223 [2024-11-20 13:55:31.925074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.223 [2024-11-20 13:55:31.925630] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:36:20.223 [2024-11-20 13:55:31.949264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.223 [2024-11-20 13:55:31.949308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:36:20.223 [2024-11-20 13:55:31.949325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.672 ms 00:36:20.223 [2024-11-20 13:55:31.949337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.223 [2024-11-20 13:55:31.963478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:36:20.223 [2024-11-20 13:55:31.963652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:36:20.223 [2024-11-20 13:55:31.963681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:36:20.223 [2024-11-20 13:55:31.963692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.223 [2024-11-20 13:55:31.964207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.223 [2024-11-20 13:55:31.964224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:20.223 [2024-11-20 13:55:31.964237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.419 ms 00:36:20.223 [2024-11-20 13:55:31.964248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.223 [2024-11-20 13:55:31.964311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.223 [2024-11-20 13:55:31.964326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:20.223 [2024-11-20 13:55:31.964337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:36:20.223 [2024-11-20 13:55:31.964348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.223 [2024-11-20 13:55:31.964379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.223 [2024-11-20 13:55:31.964390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:36:20.223 [2024-11-20 13:55:31.964401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:36:20.224 [2024-11-20 13:55:31.964411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.224 [2024-11-20 13:55:31.964438] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:36:20.224 [2024-11-20 13:55:31.968568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.224 [2024-11-20 13:55:31.968605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:20.224 [2024-11-20 13:55:31.968618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.142 ms 00:36:20.224 [2024-11-20 13:55:31.968629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.224 [2024-11-20 13:55:31.968661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.224 [2024-11-20 13:55:31.968673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:36:20.224 [2024-11-20 13:55:31.968684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:20.224 [2024-11-20 13:55:31.968694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.224 [2024-11-20 13:55:31.968736] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:36:20.224 [2024-11-20 13:55:31.968760] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:36:20.224 [2024-11-20 13:55:31.968795] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:36:20.224 [2024-11-20 13:55:31.968817] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:36:20.224 [2024-11-20 13:55:31.968906] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:36:20.224 [2024-11-20 13:55:31.968920] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:36:20.224 [2024-11-20 13:55:31.968933] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:36:20.224 [2024-11-20 13:55:31.968947] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:36:20.224 [2024-11-20 13:55:31.968959] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:36:20.224 [2024-11-20 13:55:31.968971] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:36:20.224 [2024-11-20 13:55:31.968981] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:36:20.224 [2024-11-20 13:55:31.968992] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:36:20.224 [2024-11-20 13:55:31.969002] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:36:20.224 [2024-11-20 13:55:31.969012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.224 [2024-11-20 13:55:31.969026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:36:20.224 [2024-11-20 13:55:31.969036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.280 ms 00:36:20.224 [2024-11-20 13:55:31.969047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.224 [2024-11-20 13:55:31.969120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.224 [2024-11-20 13:55:31.969131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:36:20.224 [2024-11-20 13:55:31.969141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:36:20.224 [2024-11-20 13:55:31.969152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.224 [2024-11-20 13:55:31.969244] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:36:20.224 [2024-11-20 13:55:31.969257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:36:20.224 [2024-11-20 13:55:31.969272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:20.224 [2024-11-20 13:55:31.969283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:20.224 [2024-11-20 13:55:31.969294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:36:20.224 [2024-11-20 13:55:31.969304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:36:20.224 [2024-11-20 13:55:31.969313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:36:20.224 [2024-11-20 13:55:31.969324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:36:20.224 [2024-11-20 13:55:31.969335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:36:20.224 [2024-11-20 13:55:31.969345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:20.224 [2024-11-20 13:55:31.969356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:36:20.224 [2024-11-20 13:55:31.969366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:36:20.224 [2024-11-20 13:55:31.969376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:20.224 [2024-11-20 13:55:31.969386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:36:20.224 [2024-11-20 13:55:31.969395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:36:20.224 [2024-11-20 13:55:31.969404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:20.224 [2024-11-20 13:55:31.969414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:36:20.224 [2024-11-20 13:55:31.969424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:36:20.224 [2024-11-20 13:55:31.969433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:20.224 [2024-11-20 13:55:31.969443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:36:20.224 [2024-11-20 13:55:31.969452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:36:20.224 [2024-11-20 13:55:31.969462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:20.224 [2024-11-20 13:55:31.969471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:36:20.224 [2024-11-20 13:55:31.969492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:36:20.224 [2024-11-20 13:55:31.969502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:20.224 [2024-11-20 13:55:31.969511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:36:20.224 [2024-11-20 13:55:31.969521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:36:20.224 [2024-11-20 13:55:31.969531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:20.224 [2024-11-20 13:55:31.969541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:36:20.224 [2024-11-20 13:55:31.969550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:36:20.224 [2024-11-20 13:55:31.969559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:20.224 [2024-11-20 13:55:31.969569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:36:20.224 [2024-11-20 13:55:31.969579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:36:20.224 [2024-11-20 13:55:31.969589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:20.224 [2024-11-20 13:55:31.969609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:36:20.224 [2024-11-20 13:55:31.969621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:36:20.224 [2024-11-20 13:55:31.969631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:20.224 [2024-11-20 13:55:31.969640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:36:20.224 [2024-11-20 13:55:31.969650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:36:20.224 [2024-11-20 13:55:31.969660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:20.224 [2024-11-20 13:55:31.969669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:36:20.224 [2024-11-20 13:55:31.969679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:36:20.224 [2024-11-20 13:55:31.969689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:20.224 [2024-11-20 13:55:31.969699] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:36:20.224 [2024-11-20 13:55:31.969710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:36:20.224 [2024-11-20 13:55:31.969720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:20.224 [2024-11-20 13:55:31.969730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:36:20.224 [2024-11-20 13:55:31.969741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:36:20.224 [2024-11-20 13:55:31.969751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:36:20.224 [2024-11-20 13:55:31.969761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:36:20.224 [2024-11-20 13:55:31.969771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:36:20.224 [2024-11-20 13:55:31.969780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:36:20.224 [2024-11-20 13:55:31.969790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:36:20.224 [2024-11-20 13:55:31.969802] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:36:20.224 [2024-11-20 13:55:31.969814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:20.224 [2024-11-20 13:55:31.969826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:36:20.224 [2024-11-20 13:55:31.969837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:36:20.224 [2024-11-20 13:55:31.969848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:36:20.224 [2024-11-20 13:55:31.969859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:36:20.224 [2024-11-20 13:55:31.969870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:36:20.224 [2024-11-20 13:55:31.969882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:36:20.224 [2024-11-20 13:55:31.969892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:36:20.224 [2024-11-20 13:55:31.969903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:36:20.224 [2024-11-20 13:55:31.969914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:36:20.224 [2024-11-20 13:55:31.969925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:36:20.224 [2024-11-20 13:55:31.969936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:36:20.224 [2024-11-20 13:55:31.969947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:36:20.224 [2024-11-20 13:55:31.969959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:36:20.224 [2024-11-20 13:55:31.969970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:36:20.224 [2024-11-20 13:55:31.969981] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:36:20.224 [2024-11-20 13:55:31.969992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:20.224 [2024-11-20 13:55:31.970008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:20.224 [2024-11-20 13:55:31.970020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:36:20.224 [2024-11-20 13:55:31.970032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:36:20.224 [2024-11-20 13:55:31.970043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:36:20.224 [2024-11-20 13:55:31.970054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.224 [2024-11-20 13:55:31.970066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:36:20.224 [2024-11-20 13:55:31.970076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.866 ms 00:36:20.224 [2024-11-20 13:55:31.970087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.224 [2024-11-20 13:55:32.008962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.224 [2024-11-20 13:55:32.009153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:20.224 [2024-11-20 13:55:32.009236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.880 ms 00:36:20.224 [2024-11-20 13:55:32.009274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.224 [2024-11-20 13:55:32.009356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.224 [2024-11-20 13:55:32.009389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:36:20.224 [2024-11-20 13:55:32.009421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:36:20.224 [2024-11-20 13:55:32.009505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.224 [2024-11-20 13:55:32.056151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.224 [2024-11-20 13:55:32.056356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:20.224 [2024-11-20 13:55:32.056439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.585 ms 00:36:20.224 [2024-11-20 13:55:32.056477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.224 [2024-11-20 13:55:32.056560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.224 [2024-11-20 13:55:32.056593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:20.224 [2024-11-20 13:55:32.056690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:36:20.224 [2024-11-20 13:55:32.056726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.224 [2024-11-20 13:55:32.056932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.224 [2024-11-20 13:55:32.057021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:20.224 [2024-11-20 13:55:32.057055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:36:20.224 [2024-11-20 13:55:32.057136] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:36:20.224 [2024-11-20 13:55:32.057215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.224 [2024-11-20 13:55:32.057288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:20.224 [2024-11-20 13:55:32.057324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:36:20.224 [2024-11-20 13:55:32.057387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.224 [2024-11-20 13:55:32.078447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.224 [2024-11-20 13:55:32.078658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:20.224 [2024-11-20 13:55:32.078743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.032 ms 00:36:20.224 [2024-11-20 13:55:32.078788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.224 [2024-11-20 13:55:32.078956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.224 [2024-11-20 13:55:32.079058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:36:20.224 [2024-11-20 13:55:32.079097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:20.224 [2024-11-20 13:55:32.079128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.224 [2024-11-20 13:55:32.117291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.224 [2024-11-20 13:55:32.117476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:36:20.224 [2024-11-20 13:55:32.117551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.118 ms 00:36:20.224 [2024-11-20 13:55:32.117589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.224 [2024-11-20 13:55:32.132691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.224 [2024-11-20 13:55:32.132827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:36:20.224 [2024-11-20 13:55:32.132917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.742 ms 00:36:20.224 [2024-11-20 13:55:32.132953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.483 [2024-11-20 13:55:32.217534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.483 [2024-11-20 13:55:32.217794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:36:20.483 [2024-11-20 13:55:32.217902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 84.619 ms 00:36:20.483 [2024-11-20 13:55:32.217942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.483 [2024-11-20 13:55:32.218153] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:36:20.483 [2024-11-20 13:55:32.218417] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:36:20.483 [2024-11-20 13:55:32.218594] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:36:20.483 [2024-11-20 13:55:32.218772] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:36:20.483 [2024-11-20 13:55:32.218885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.484 [2024-11-20 13:55:32.218919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:36:20.484 [2024-11-20 
13:55:32.218950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.862 ms 00:36:20.484 [2024-11-20 13:55:32.219013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.484 [2024-11-20 13:55:32.219138] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:36:20.484 [2024-11-20 13:55:32.219242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.484 [2024-11-20 13:55:32.219282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:36:20.484 [2024-11-20 13:55:32.219313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.104 ms 00:36:20.484 [2024-11-20 13:55:32.219344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.484 [2024-11-20 13:55:32.242321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.484 [2024-11-20 13:55:32.242534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:36:20.484 [2024-11-20 13:55:32.242562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.895 ms 00:36:20.484 [2024-11-20 13:55:32.242574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.484 [2024-11-20 13:55:32.257151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.484 [2024-11-20 13:55:32.257373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:36:20.484 [2024-11-20 13:55:32.257399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:36:20.484 [2024-11-20 13:55:32.257411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:20.484 [2024-11-20 13:55:32.257576] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:36:20.484 [2024-11-20 13:55:32.257795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:20.484 [2024-11-20 13:55:32.257807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:36:20.484 [2024-11-20 13:55:32.257819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.222 ms 00:36:20.484 [2024-11-20 13:55:32.257831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.051 [2024-11-20 13:55:32.817370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.051 [2024-11-20 13:55:32.817453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:36:21.051 [2024-11-20 13:55:32.817474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 559.013 ms 00:36:21.051 [2024-11-20 13:55:32.817486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.051 [2024-11-20 13:55:32.823038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.051 [2024-11-20 13:55:32.823082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:36:21.051 [2024-11-20 13:55:32.823096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.943 ms 00:36:21.051 [2024-11-20 13:55:32.823108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.051 [2024-11-20 13:55:32.823476] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:36:21.051 [2024-11-20 13:55:32.823502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.051 [2024-11-20 13:55:32.823514] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:36:21.051 [2024-11-20 13:55:32.823526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.355 ms 00:36:21.051 [2024-11-20 13:55:32.823537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.051 [2024-11-20 13:55:32.823568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.051 [2024-11-20 13:55:32.823580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:36:21.051 [2024-11-20 13:55:32.823592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:36:21.051 [2024-11-20 13:55:32.823616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.051 [2024-11-20 13:55:32.823659] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 567.005 ms, result 0 00:36:21.051 [2024-11-20 13:55:32.823704] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:36:21.051 [2024-11-20 13:55:32.823788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.051 [2024-11-20 13:55:32.823798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:36:21.051 [2024-11-20 13:55:32.823809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.086 ms 00:36:21.051 [2024-11-20 13:55:32.823819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.619 [2024-11-20 13:55:33.381403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.619 [2024-11-20 13:55:33.381469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:36:21.619 [2024-11-20 13:55:33.381487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 557.300 ms 00:36:21.619 [2024-11-20 13:55:33.381498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.619 [2024-11-20 13:55:33.387519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.619 [2024-11-20 13:55:33.387565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:36:21.619 [2024-11-20 13:55:33.387579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.310 ms 00:36:21.619 [2024-11-20 13:55:33.387590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.619 [2024-11-20 13:55:33.387985] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:36:21.620 [2024-11-20 13:55:33.388011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.620 [2024-11-20 13:55:33.388023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:36:21.620 [2024-11-20 13:55:33.388035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.379 ms 00:36:21.620 [2024-11-20 13:55:33.388046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.620 [2024-11-20 13:55:33.388186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.620 [2024-11-20 13:55:33.388201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:36:21.620 [2024-11-20 13:55:33.388212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:21.620 [2024-11-20 13:55:33.388223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.620 [2024-11-20 
13:55:33.388265] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 565.475 ms, result 0 00:36:21.620 [2024-11-20 13:55:33.388311] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:36:21.620 [2024-11-20 13:55:33.388332] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:36:21.620 [2024-11-20 13:55:33.388348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.620 [2024-11-20 13:55:33.388360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:36:21.620 [2024-11-20 13:55:33.388372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1132.630 ms 00:36:21.620 [2024-11-20 13:55:33.388383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.620 [2024-11-20 13:55:33.388416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.620 [2024-11-20 13:55:33.388428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:36:21.620 [2024-11-20 13:55:33.388445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:36:21.620 [2024-11-20 13:55:33.388456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.620 [2024-11-20 13:55:33.400443] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:36:21.620 [2024-11-20 13:55:33.400593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.620 [2024-11-20 13:55:33.400620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:36:21.620 [2024-11-20 13:55:33.400633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.138 ms 00:36:21.620 [2024-11-20 13:55:33.400644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.620 [2024-11-20 13:55:33.401232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.620 [2024-11-20 13:55:33.401266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:36:21.620 [2024-11-20 13:55:33.401285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.505 ms 00:36:21.620 [2024-11-20 13:55:33.401295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.620 [2024-11-20 13:55:33.403322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.620 [2024-11-20 13:55:33.403481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:36:21.620 [2024-11-20 13:55:33.403505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.006 ms 00:36:21.620 [2024-11-20 13:55:33.403516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.620 [2024-11-20 13:55:33.403577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.620 [2024-11-20 13:55:33.403589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:36:21.620 [2024-11-20 13:55:33.403616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:36:21.620 [2024-11-20 13:55:33.403634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.620 [2024-11-20 13:55:33.403738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.620 [2024-11-20 13:55:33.403750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:36:21.620 
[2024-11-20 13:55:33.403762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:36:21.620 [2024-11-20 13:55:33.403772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.620 [2024-11-20 13:55:33.403795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.620 [2024-11-20 13:55:33.403806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:36:21.620 [2024-11-20 13:55:33.403817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:21.620 [2024-11-20 13:55:33.403828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.620 [2024-11-20 13:55:33.403866] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:36:21.620 [2024-11-20 13:55:33.403878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.620 [2024-11-20 13:55:33.403889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:36:21.620 [2024-11-20 13:55:33.403901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:36:21.620 [2024-11-20 13:55:33.403911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.620 [2024-11-20 13:55:33.403966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:21.620 [2024-11-20 13:55:33.403978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:36:21.620 [2024-11-20 13:55:33.403989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:36:21.620 [2024-11-20 13:55:33.403999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:21.620 [2024-11-20 13:55:33.404931] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1483.078 ms, result 0 00:36:21.620 [2024-11-20 13:55:33.417260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:21.620 [2024-11-20 13:55:33.433250] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:36:21.620 [2024-11-20 13:55:33.442992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:21.620 Validate MD5 checksum, iteration 1 00:36:21.620 13:55:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:21.620 13:55:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:36:21.620 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:21.620 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:36:21.620 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:36:21.620 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:36:21.620 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:36:21.620 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:21.620 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:36:21.620 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:21.620 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:21.620 13:55:33 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:21.620 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:21.620 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:21.620 13:55:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:21.879 [2024-11-20 13:55:33.582712] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:36:21.879 [2024-11-20 13:55:33.582974] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84295 ] 00:36:21.879 [2024-11-20 13:55:33.762248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.138 [2024-11-20 13:55:33.885235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.044  [2024-11-20T13:55:36.261Z] Copying: 683/1024 [MB] (683 MBps) [2024-11-20T13:55:37.734Z] Copying: 1024/1024 [MB] (average 674 MBps) 00:36:25.777 00:36:25.777 13:55:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:36:25.777 13:55:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:27.679 13:55:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:27.679 Validate MD5 checksum, iteration 2 00:36:27.679 13:55:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=682a7c4557960e23a7e1054c4957b380 00:36:27.679 13:55:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 682a7c4557960e23a7e1054c4957b380 != \6\8\2\a\7\c\4\5\5\7\9\6\0\e\2\3\a\7\e\1\0\5\4\c\4\9\5\7\b\3\8\0 ]] 00:36:27.679 13:55:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:27.679 13:55:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:27.679 13:55:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:36:27.679 13:55:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:27.679 13:55:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:27.679 13:55:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:27.679 13:55:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:27.679 13:55:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:27.679 13:55:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:27.679 [2024-11-20 13:55:39.432876] Starting SPDK v25.01-pre git sha1 
d2ebd983e / DPDK 24.03.0 initialization... 00:36:27.679 [2024-11-20 13:55:39.433179] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84358 ] 00:36:27.679 [2024-11-20 13:55:39.613115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.938 [2024-11-20 13:55:39.724895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:29.846  [2024-11-20T13:55:42.062Z] Copying: 673/1024 [MB] (673 MBps) [2024-11-20T13:55:43.439Z] Copying: 1024/1024 [MB] (average 664 MBps) 00:36:31.482 00:36:31.482 13:55:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:36:31.482 13:55:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:33.382 13:55:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:33.382 13:55:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=dbfdfc1c02acec90456ae5547df2be1e 00:36:33.382 13:55:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ dbfdfc1c02acec90456ae5547df2be1e != \d\b\f\d\f\c\1\c\0\2\a\c\e\c\9\0\4\5\6\a\e\5\5\4\7\d\f\2\b\e\1\e ]] 00:36:33.382 13:55:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:33.382 13:55:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:33.382 13:55:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:36:33.382 13:55:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:36:33.382 13:55:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:36:33.382 13:55:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:33.382 13:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:36:33.382 13:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:36:33.382 13:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:36:33.382 13:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:36:33.382 13:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84260 ]] 00:36:33.382 13:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84260 00:36:33.382 13:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84260 ']' 00:36:33.383 13:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84260 00:36:33.383 13:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:36:33.383 13:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:33.383 13:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84260 00:36:33.383 killing process with pid 84260 00:36:33.383 13:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:33.383 13:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:33.383 13:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84260' 00:36:33.383 13:55:45 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84260 00:36:33.383 13:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84260 00:36:34.762 [2024-11-20 13:55:46.303893] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:36:34.762 [2024-11-20 13:55:46.323062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.762 [2024-11-20 13:55:46.323110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:36:34.762 [2024-11-20 13:55:46.323127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:36:34.762 [2024-11-20 13:55:46.323138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.762 [2024-11-20 13:55:46.323162] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:36:34.762 [2024-11-20 13:55:46.327270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.762 [2024-11-20 13:55:46.327301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:36:34.762 [2024-11-20 13:55:46.327321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.096 ms 00:36:34.762 [2024-11-20 13:55:46.327332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.762 [2024-11-20 13:55:46.327538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.762 [2024-11-20 13:55:46.327551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:36:34.762 [2024-11-20 13:55:46.327563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:36:34.762 [2024-11-20 13:55:46.327574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.762 [2024-11-20 13:55:46.328622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.762 [2024-11-20 13:55:46.328655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:36:34.762 [2024-11-20 13:55:46.328668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.033 ms 00:36:34.762 [2024-11-20 13:55:46.328678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.762 [2024-11-20 13:55:46.329619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.762 [2024-11-20 13:55:46.329644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:36:34.762 [2024-11-20 13:55:46.329656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.902 ms 00:36:34.762 [2024-11-20 13:55:46.329679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.762 [2024-11-20 13:55:46.344322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.762 [2024-11-20 13:55:46.344375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:36:34.762 [2024-11-20 13:55:46.344398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.615 ms 00:36:34.762 [2024-11-20 13:55:46.344423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.762 [2024-11-20 13:55:46.351992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.762 [2024-11-20 13:55:46.352046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:36:34.762 [2024-11-20 13:55:46.352071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.523 ms 00:36:34.762 [2024-11-20 13:55:46.352090] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:36:34.762 [2024-11-20 13:55:46.352200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.762 [2024-11-20 13:55:46.352223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:36:34.762 [2024-11-20 13:55:46.352244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:36:34.762 [2024-11-20 13:55:46.352262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.762 [2024-11-20 13:55:46.367476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.762 [2024-11-20 13:55:46.367654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:36:34.762 [2024-11-20 13:55:46.367679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.202 ms 00:36:34.762 [2024-11-20 13:55:46.367707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.762 [2024-11-20 13:55:46.383706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.762 [2024-11-20 13:55:46.383742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:36:34.762 [2024-11-20 13:55:46.383756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.964 ms 00:36:34.762 [2024-11-20 13:55:46.383767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.762 [2024-11-20 13:55:46.397727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.762 [2024-11-20 13:55:46.397760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:36:34.762 [2024-11-20 13:55:46.397774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.943 ms 00:36:34.762 [2024-11-20 13:55:46.397784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.762 [2024-11-20 13:55:46.412247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.762 [2024-11-20 13:55:46.412278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:36:34.762 [2024-11-20 13:55:46.412291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.421 ms 00:36:34.762 [2024-11-20 13:55:46.412302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.762 [2024-11-20 13:55:46.412341] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:36:34.762 [2024-11-20 13:55:46.412359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:36:34.762 [2024-11-20 13:55:46.412373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:36:34.762 [2024-11-20 13:55:46.412385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:36:34.762 [2024-11-20 13:55:46.412397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:34.762 [2024-11-20 13:55:46.412409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:34.762 [2024-11-20 13:55:46.412420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:34.762 [2024-11-20 13:55:46.412432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:34.762 [2024-11-20 13:55:46.412443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:34.762 
[2024-11-20 13:55:46.412454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:34.762 [2024-11-20 13:55:46.412465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:34.762 [2024-11-20 13:55:46.412475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:34.762 [2024-11-20 13:55:46.412487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:34.762 [2024-11-20 13:55:46.412498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:34.762 [2024-11-20 13:55:46.412508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:34.762 [2024-11-20 13:55:46.412519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:34.762 [2024-11-20 13:55:46.412530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:34.762 [2024-11-20 13:55:46.412541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:34.762 [2024-11-20 13:55:46.412551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:34.763 [2024-11-20 13:55:46.412565] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:36:34.763 [2024-11-20 13:55:46.412576] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 70390623-5db4-4a53-b61f-90066da92760 00:36:34.763 [2024-11-20 13:55:46.412588] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:36:34.763 [2024-11-20 13:55:46.412607] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:36:34.763 [2024-11-20 13:55:46.412617] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:36:34.763 [2024-11-20 13:55:46.412628] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:36:34.763 [2024-11-20 13:55:46.412639] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:36:34.763 [2024-11-20 13:55:46.412650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:36:34.763 [2024-11-20 13:55:46.412661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:36:34.763 [2024-11-20 13:55:46.412671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:36:34.763 [2024-11-20 13:55:46.412681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:36:34.763 [2024-11-20 13:55:46.412693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.763 [2024-11-20 13:55:46.412709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:36:34.763 [2024-11-20 13:55:46.412721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.354 ms 00:36:34.763 [2024-11-20 13:55:46.412732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:34.763 [2024-11-20 13:55:46.433672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:34.763 [2024-11-20 13:55:46.433723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:36:34.763 [2024-11-20 13:55:46.433738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.942 ms 00:36:34.763 [2024-11-20 13:55:46.433750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
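Every management step in this shutdown sequence, like the startup above it, is emitted by mngt/ftl_mngt.c as the same four trace_step records: Action, name, duration, status. When triaging a slow 'FTL startup' or 'FTL shutdown' pass from a saved log, the per-step durations can be pulled out with a short filter; a sketch (the log filename is a placeholder):

    # Pair each step name with its duration and list the slowest steps;
    # assumes the trace_step 'name: ...' / 'duration: ... ms' records shown above.
    awk '/trace_step.*name:/     { sub(/.*name: /, "");     step = $0 }
         /trace_step.*duration:/ { sub(/.*duration: /, ""); print $1 " ms\t" step }' \
        ftl.log | sort -rn | head

Against this run such a filter would surface the two open-chunk vss reads (roughly 559 ms and 557 ms) and the 'Restore P2L checkpoints' step (about 84.6 ms) as the dominant contributors to the 1483 ms 'FTL startup' total.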
00:36:34.763 [2024-11-20 13:55:46.434294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:34.763 [2024-11-20 13:55:46.434316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:36:34.763 [2024-11-20 13:55:46.434328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.518 ms
00:36:34.763 [2024-11-20 13:55:46.434339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:34.763 [2024-11-20 13:55:46.500739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:34.763 [2024-11-20 13:55:46.500916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:36:34.763 [2024-11-20 13:55:46.500938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:34.763 [2024-11-20 13:55:46.500951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:34.763 [2024-11-20 13:55:46.500998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:34.763 [2024-11-20 13:55:46.501009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:36:34.763 [2024-11-20 13:55:46.501020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:34.763 [2024-11-20 13:55:46.501031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:34.763 [2024-11-20 13:55:46.501120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:34.763 [2024-11-20 13:55:46.501134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:36:34.763 [2024-11-20 13:55:46.501145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:34.763 [2024-11-20 13:55:46.501156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:34.763 [2024-11-20 13:55:46.501175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:34.763 [2024-11-20 13:55:46.501192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:36:34.763 [2024-11-20 13:55:46.501203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:34.763 [2024-11-20 13:55:46.501213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:34.763 [2024-11-20 13:55:46.624298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:34.763 [2024-11-20 13:55:46.624546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:36:34.763 [2024-11-20 13:55:46.624573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:34.763 [2024-11-20 13:55:46.624585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:35.022 [2024-11-20 13:55:46.725548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:35.023 [2024-11-20 13:55:46.725632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:36:35.023 [2024-11-20 13:55:46.725651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:35.023 [2024-11-20 13:55:46.725662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:35.023 [2024-11-20 13:55:46.725789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:35.023 [2024-11-20 13:55:46.725802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:36:35.023 [2024-11-20 13:55:46.725813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:35.023 [2024-11-20 13:55:46.725824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:35.023 [2024-11-20 13:55:46.725878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:35.023 [2024-11-20 13:55:46.725891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:36:35.023 [2024-11-20 13:55:46.725909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:35.023 [2024-11-20 13:55:46.725932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:35.023 [2024-11-20 13:55:46.726045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:35.023 [2024-11-20 13:55:46.726059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:36:35.023 [2024-11-20 13:55:46.726071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:35.023 [2024-11-20 13:55:46.726082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:35.023 [2024-11-20 13:55:46.726129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:35.023 [2024-11-20 13:55:46.726141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:36:35.023 [2024-11-20 13:55:46.726152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:35.023 [2024-11-20 13:55:46.726167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:35.023 [2024-11-20 13:55:46.726207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:35.023 [2024-11-20 13:55:46.726230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:36:35.023 [2024-11-20 13:55:46.726242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:35.023 [2024-11-20 13:55:46.726252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:35.023 [2024-11-20 13:55:46.726299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:35.023 [2024-11-20 13:55:46.726312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:36:35.023 [2024-11-20 13:55:46.726327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:35.023 [2024-11-20 13:55:46.726337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:35.023 [2024-11-20 13:55:46.726460] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 404.017 ms, result 0
00:36:36.402 13:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:36:36.402 13:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:36:36.402 13:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:36:36.402 13:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:36:36.402 13:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:36:36.402 13:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:36:36.402 Remove shared memory files
00:36:36.402 13:55:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:36:36.402 13:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:36:36.402 13:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:36:36.402 13:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:36:36.402 13:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84024
00:36:36.402 13:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:36:36.402 13:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:36:36.402 ************************************
00:36:36.402 END TEST ftl_upgrade_shutdown
00:36:36.402 ************************************
00:36:36.402
00:36:36.402 real 1m27.655s
00:36:36.402 user 2m1.423s
00:36:36.402 sys 0m21.660s
00:36:36.402 13:55:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:36.402 13:55:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:36:36.402 13:55:48 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:36:36.402 13:55:48 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:36:36.402 13:55:48 ftl -- ftl/ftl.sh@14 -- # killprocess 76987
00:36:36.402 13:55:48 ftl -- common/autotest_common.sh@954 -- # '[' -z 76987 ']'
00:36:36.402 13:55:48 ftl -- common/autotest_common.sh@958 -- # kill -0 76987
00:36:36.402 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76987) - No such process
00:36:36.402 Process with pid 76987 is not found
00:36:36.402 13:55:48 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76987 is not found'
00:36:36.402 13:55:48 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:36:36.402 13:55:48 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84486
00:36:36.402 13:55:48 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:36:36.402 13:55:48 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84486
00:36:36.402 13:55:48 ftl -- common/autotest_common.sh@835 -- # '[' -z 84486 ']'
00:36:36.402 13:55:48 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:36.402 13:55:48 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:36.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:36.402 13:55:48 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:36.402 13:55:48 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:36.402 13:55:48 ftl -- common/autotest_common.sh@10 -- # set +x
00:36:36.402 [2024-11-20 13:55:48.203816] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
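[Editor's note] The xtrace above shows the harness's two process-lifecycle helpers: killprocess probes the old target with kill -0 and inspects it with ps --no-headers -o comm=, while waitforlisten (with rpc_addr=/var/tmp/spdk.sock and max_retries=100, both visible in the trace) blocks until the freshly launched spdk_tgt is ready. The bash sketch below is a reconstruction of that pattern, not the verbatim autotest_common.sh source: only the individual commands are taken from the trace, and the socket-existence test is a simplification (the real helper also verifies that the RPC server actually responds).

  #!/usr/bin/env bash
  # Sketch of the kill/wait pattern traced above; hypothetical reconstruction.

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      # kill -0 delivers no signal; it only tests that $pid exists and is signalable.
      if ! kill -0 "$pid" 2> /dev/null; then
          echo "Process with pid $pid is not found"
          return 0
      fi
      # comm= prints the bare executable name (reactor_0 in the trace), no header.
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      if [ "$process_name" = "sudo" ]; then
          : # the real helper signals the escalated child here; omitted in this sketch
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2> /dev/null || true # wait only succeeds for our own children
  }

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          # give up early if the target died during startup
          kill -0 "$pid" 2> /dev/null || return 1
          # treat an existing UNIX socket as "listening" for this sketch
          [ -S "$rpc_addr" ] && return 0
          sleep 0.1
      done
      return 1
  }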
00:36:36.402 [2024-11-20 13:55:48.203952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84486 ]
00:36:36.661 [2024-11-20 13:55:48.385469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:36.661 [2024-11-20 13:55:48.501817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:37.595 13:55:49 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:37.595 13:55:49 ftl -- common/autotest_common.sh@868 -- # return 0
00:36:37.595 13:55:49 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:36:37.854 nvme0n1
00:36:37.854 13:55:49 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:36:37.854 13:55:49 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:36:37.854 13:55:49 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:36:38.113 13:55:49 ftl -- ftl/common.sh@28 -- # stores=6b34bd00-d88d-4ee3-804f-136c1cfa5fcb
00:36:38.113 13:55:49 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:36:38.113 13:55:49 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6b34bd00-d88d-4ee3-804f-136c1cfa5fcb
00:36:38.372 13:55:50 ftl -- ftl/ftl.sh@23 -- # killprocess 84486
00:36:38.372 13:55:50 ftl -- common/autotest_common.sh@954 -- # '[' -z 84486 ']'
00:36:38.372 13:55:50 ftl -- common/autotest_common.sh@958 -- # kill -0 84486
00:36:38.372 13:55:50 ftl -- common/autotest_common.sh@959 -- # uname
00:36:38.372 13:55:50 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:38.372 13:55:50 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84486
00:36:38.372 killing process with pid 84486
00:36:38.372 13:55:50 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:38.372 13:55:50 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:38.372 13:55:50 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84486'
00:36:38.372 13:55:50 ftl -- common/autotest_common.sh@973 -- # kill 84486
00:36:38.372 13:55:50 ftl -- common/autotest_common.sh@978 -- # wait 84486
00:36:40.907 13:55:52 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:36:40.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:41.166 Waiting for block devices as requested
00:36:41.166 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:36:41.166 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:36:41.437 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:36:41.437 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:36:46.713 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:36:46.713 13:55:58 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:36:46.713 Remove shared memory files
00:36:46.713 13:55:58 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:36:46.713 13:55:58 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:36:46.713 13:55:58 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:36:46.713 13:55:58 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:36:46.713 13:55:58 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:36:46.713 13:55:58 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:36:46.713 ************************************
00:36:46.713 END TEST ftl
00:36:46.713 ************************************
00:36:46.713
00:36:46.713 real 11m9.182s
00:36:46.713 user 13m56.839s
00:36:46.713 sys 1m30.761s
00:36:46.713 13:55:58 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:46.713 13:55:58 ftl -- common/autotest_common.sh@10 -- # set +x
00:36:46.713 13:55:58 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:36:46.713 13:55:58 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:36:46.713 13:55:58 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:36:46.713 13:55:58 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:36:46.713 13:55:58 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:36:46.713 13:55:58 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:36:46.713 13:55:58 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:36:46.713 13:55:58 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:36:46.713 13:55:58 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:36:46.713 13:55:58 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:36:46.713 13:55:58 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:46.713 13:55:58 -- common/autotest_common.sh@10 -- # set +x
00:36:46.713 13:55:58 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:36:46.713 13:55:58 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:36:46.713 13:55:58 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:36:46.713 13:55:58 -- common/autotest_common.sh@10 -- # set +x
00:36:49.263 INFO: APP EXITING
00:36:49.263 INFO: killing all VMs
00:36:49.263 INFO: killing vhost app
00:36:49.263 INFO: EXIT DONE
00:36:49.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:49.845 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:36:49.846 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:36:49.846 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:36:49.846 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:36:50.414 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:50.673 Cleaning
00:36:50.673 Removing: /var/run/dpdk/spdk0/config
00:36:50.673 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:36:50.673 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:36:50.673 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:36:50.673 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:36:50.673 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:36:50.673 Removing: /var/run/dpdk/spdk0/hugepage_info
00:36:50.932 Removing: /var/run/dpdk/spdk0
00:36:50.932 Removing: /var/run/dpdk/spdk_pid57720
00:36:50.932 Removing: /var/run/dpdk/spdk_pid57955
00:36:50.932 Removing: /var/run/dpdk/spdk_pid58195
00:36:50.932 Removing: /var/run/dpdk/spdk_pid58299
00:36:50.932 Removing: /var/run/dpdk/spdk_pid58355
00:36:50.932 Removing: /var/run/dpdk/spdk_pid58483
00:36:50.932 Removing: /var/run/dpdk/spdk_pid58512
00:36:50.932 Removing: /var/run/dpdk/spdk_pid58722
00:36:50.932 Removing: /var/run/dpdk/spdk_pid58828
00:36:50.932 Removing: /var/run/dpdk/spdk_pid58937
00:36:50.932 Removing: /var/run/dpdk/spdk_pid59063
00:36:50.932 Removing: /var/run/dpdk/spdk_pid59171
00:36:50.932 Removing: /var/run/dpdk/spdk_pid59210
00:36:50.932 Removing: /var/run/dpdk/spdk_pid59247
00:36:50.932 Removing: /var/run/dpdk/spdk_pid59323
00:36:50.932 Removing: /var/run/dpdk/spdk_pid59451
00:36:50.932 Removing: /var/run/dpdk/spdk_pid59911
00:36:50.932 Removing: /var/run/dpdk/spdk_pid59991
00:36:50.932 Removing: /var/run/dpdk/spdk_pid60065
00:36:50.932 Removing: /var/run/dpdk/spdk_pid60087
00:36:50.932 Removing: /var/run/dpdk/spdk_pid60241
00:36:50.932 Removing: /var/run/dpdk/spdk_pid60262
00:36:50.932 Removing: /var/run/dpdk/spdk_pid60416
00:36:50.932 Removing: /var/run/dpdk/spdk_pid60437
00:36:50.932 Removing: /var/run/dpdk/spdk_pid60507
00:36:50.932 Removing: /var/run/dpdk/spdk_pid60529
00:36:50.932 Removing: /var/run/dpdk/spdk_pid60594
00:36:50.932 Removing: /var/run/dpdk/spdk_pid60618
00:36:50.932 Removing: /var/run/dpdk/spdk_pid60818
00:36:50.932 Removing: /var/run/dpdk/spdk_pid60855
00:36:50.932 Removing: /var/run/dpdk/spdk_pid60944
00:36:50.932 Removing: /var/run/dpdk/spdk_pid61137
00:36:50.932 Removing: /var/run/dpdk/spdk_pid61233
00:36:50.932 Removing: /var/run/dpdk/spdk_pid61281
00:36:50.932 Removing: /var/run/dpdk/spdk_pid61737
00:36:50.932 Removing: /var/run/dpdk/spdk_pid61845
00:36:50.932 Removing: /var/run/dpdk/spdk_pid61961
00:36:50.932 Removing: /var/run/dpdk/spdk_pid62014
00:36:50.932 Removing: /var/run/dpdk/spdk_pid62045
00:36:50.932 Removing: /var/run/dpdk/spdk_pid62129
00:36:50.932 Removing: /var/run/dpdk/spdk_pid62776
00:36:50.932 Removing: /var/run/dpdk/spdk_pid62818
00:36:50.932 Removing: /var/run/dpdk/spdk_pid63317
00:36:50.932 Removing: /var/run/dpdk/spdk_pid63421
00:36:50.932 Removing: /var/run/dpdk/spdk_pid63538
00:36:50.932 Removing: /var/run/dpdk/spdk_pid63597
00:36:50.932 Removing: /var/run/dpdk/spdk_pid63619
00:36:50.932 Removing: /var/run/dpdk/spdk_pid63648
00:36:50.932 Removing: /var/run/dpdk/spdk_pid65541
00:36:50.932 Removing: /var/run/dpdk/spdk_pid65695
00:36:50.932 Removing: /var/run/dpdk/spdk_pid65703
00:36:50.932 Removing: /var/run/dpdk/spdk_pid65716
00:36:50.932 Removing: /var/run/dpdk/spdk_pid65763
00:36:50.932 Removing: /var/run/dpdk/spdk_pid65767
00:36:50.932 Removing: /var/run/dpdk/spdk_pid65779
00:36:50.932 Removing: /var/run/dpdk/spdk_pid65824
00:36:50.932 Removing: /var/run/dpdk/spdk_pid65828
00:36:51.192 Removing: /var/run/dpdk/spdk_pid65840
00:36:51.192 Removing: /var/run/dpdk/spdk_pid65891
00:36:51.192 Removing: /var/run/dpdk/spdk_pid65895
00:36:51.192 Removing: /var/run/dpdk/spdk_pid65907
00:36:51.192 Removing: /var/run/dpdk/spdk_pid67331
00:36:51.192 Removing: /var/run/dpdk/spdk_pid67444
00:36:51.192 Removing: /var/run/dpdk/spdk_pid68878
00:36:51.192 Removing: /var/run/dpdk/spdk_pid70619
00:36:51.192 Removing: /var/run/dpdk/spdk_pid70704
00:36:51.192 Removing: /var/run/dpdk/spdk_pid70785
00:36:51.192 Removing: /var/run/dpdk/spdk_pid70895
00:36:51.192 Removing: /var/run/dpdk/spdk_pid70996
00:36:51.192 Removing: /var/run/dpdk/spdk_pid71097
00:36:51.192 Removing: /var/run/dpdk/spdk_pid71177
00:36:51.192 Removing: /var/run/dpdk/spdk_pid71263
00:36:51.192 Removing: /var/run/dpdk/spdk_pid71373
00:36:51.192 Removing: /var/run/dpdk/spdk_pid71470
00:36:51.192 Removing: /var/run/dpdk/spdk_pid71575
00:36:51.192 Removing: /var/run/dpdk/spdk_pid71660
00:36:51.192 Removing: /var/run/dpdk/spdk_pid71735
00:36:51.192 Removing: /var/run/dpdk/spdk_pid71845
00:36:51.192 Removing: /var/run/dpdk/spdk_pid71949
00:36:51.192 Removing: /var/run/dpdk/spdk_pid72045
00:36:51.192 Removing: /var/run/dpdk/spdk_pid72130
00:36:51.192 Removing: /var/run/dpdk/spdk_pid72211
00:36:51.192 Removing: /var/run/dpdk/spdk_pid72329
00:36:51.192 Removing: /var/run/dpdk/spdk_pid72423
00:36:51.192 Removing: /var/run/dpdk/spdk_pid72525
00:36:51.192 Removing: /var/run/dpdk/spdk_pid72609
00:36:51.192 Removing: /var/run/dpdk/spdk_pid72693
00:36:51.192 Removing: /var/run/dpdk/spdk_pid72774
00:36:51.192 Removing: /var/run/dpdk/spdk_pid72848
00:36:51.192 Removing: /var/run/dpdk/spdk_pid72957
00:36:51.192 Removing: /var/run/dpdk/spdk_pid73053
00:36:51.192 Removing: /var/run/dpdk/spdk_pid73159
00:36:51.192 Removing: /var/run/dpdk/spdk_pid73241
00:36:51.192 Removing: /var/run/dpdk/spdk_pid73317
00:36:51.192 Removing: /var/run/dpdk/spdk_pid73397
00:36:51.192 Removing: /var/run/dpdk/spdk_pid73474
00:36:51.192 Removing: /var/run/dpdk/spdk_pid73587
00:36:51.192 Removing: /var/run/dpdk/spdk_pid73678
00:36:51.192 Removing: /var/run/dpdk/spdk_pid73827
00:36:51.192 Removing: /var/run/dpdk/spdk_pid74128
00:36:51.192 Removing: /var/run/dpdk/spdk_pid74166
00:36:51.192 Removing: /var/run/dpdk/spdk_pid74633
00:36:51.192 Removing: /var/run/dpdk/spdk_pid74825
00:36:51.192 Removing: /var/run/dpdk/spdk_pid74929
00:36:51.192 Removing: /var/run/dpdk/spdk_pid75039
00:36:51.192 Removing: /var/run/dpdk/spdk_pid75098
00:36:51.192 Removing: /var/run/dpdk/spdk_pid75125
00:36:51.192 Removing: /var/run/dpdk/spdk_pid75430
00:36:51.192 Removing: /var/run/dpdk/spdk_pid75506
00:36:51.192 Removing: /var/run/dpdk/spdk_pid75603
00:36:51.192 Removing: /var/run/dpdk/spdk_pid76035
00:36:51.192 Removing: /var/run/dpdk/spdk_pid76181
00:36:51.192 Removing: /var/run/dpdk/spdk_pid76987
00:36:51.192 Removing: /var/run/dpdk/spdk_pid77137
00:36:51.192 Removing: /var/run/dpdk/spdk_pid77347
00:36:51.192 Removing: /var/run/dpdk/spdk_pid77455
00:36:51.452 Removing: /var/run/dpdk/spdk_pid77842
00:36:51.452 Removing: /var/run/dpdk/spdk_pid78106
00:36:51.452 Removing: /var/run/dpdk/spdk_pid78471
00:36:51.452 Removing: /var/run/dpdk/spdk_pid78688
00:36:51.452 Removing: /var/run/dpdk/spdk_pid78827
00:36:51.452 Removing: /var/run/dpdk/spdk_pid78898
00:36:51.452 Removing: /var/run/dpdk/spdk_pid79041
00:36:51.452 Removing: /var/run/dpdk/spdk_pid79076
00:36:51.452 Removing: /var/run/dpdk/spdk_pid79141
00:36:51.452 Removing: /var/run/dpdk/spdk_pid79351
00:36:51.452 Removing: /var/run/dpdk/spdk_pid79595
00:36:51.452 Removing: /var/run/dpdk/spdk_pid79960
00:36:51.452 Removing: /var/run/dpdk/spdk_pid80363
00:36:51.452 Removing: /var/run/dpdk/spdk_pid80776
00:36:51.452 Removing: /var/run/dpdk/spdk_pid81256
00:36:51.452 Removing: /var/run/dpdk/spdk_pid81409
00:36:51.452 Removing: /var/run/dpdk/spdk_pid81502
00:36:51.452 Removing: /var/run/dpdk/spdk_pid82096
00:36:51.452 Removing: /var/run/dpdk/spdk_pid82171
00:36:51.452 Removing: /var/run/dpdk/spdk_pid82592
00:36:51.452 Removing: /var/run/dpdk/spdk_pid82952
00:36:51.452 Removing: /var/run/dpdk/spdk_pid83453
00:36:51.452 Removing: /var/run/dpdk/spdk_pid83581
00:36:51.452 Removing: /var/run/dpdk/spdk_pid83635
00:36:51.452 Removing: /var/run/dpdk/spdk_pid83699
00:36:51.452 Removing: /var/run/dpdk/spdk_pid83755
00:36:51.452 Removing: /var/run/dpdk/spdk_pid83823
00:36:51.452 Removing: /var/run/dpdk/spdk_pid84024
00:36:51.452 Removing: /var/run/dpdk/spdk_pid84095
00:36:51.452 Removing: /var/run/dpdk/spdk_pid84161
00:36:51.452 Removing: /var/run/dpdk/spdk_pid84260
00:36:51.452 Removing: /var/run/dpdk/spdk_pid84295
00:36:51.452 Removing: /var/run/dpdk/spdk_pid84358
00:36:51.452 Removing: /var/run/dpdk/spdk_pid84486
00:36:51.452 Clean
00:36:51.452 13:56:03 -- common/autotest_common.sh@1453 -- # return 0
00:36:51.452 13:56:03 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:36:51.452 13:56:03 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:51.452 13:56:03 -- common/autotest_common.sh@10 -- # set +x
00:36:51.712 13:56:03 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:36:51.712 13:56:03 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:51.712 13:56:03 -- common/autotest_common.sh@10 -- # set +x
00:36:51.712 13:56:03 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:36:51.712 13:56:03 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:36:51.712 13:56:03 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:36:51.712 13:56:03 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:51.712 13:56:03 -- spdk/autotest.sh@398 -- # hostname
00:36:51.712 13:56:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:36:51.972 geninfo: WARNING: invalid characters removed from testname!
00:37:18.537 13:56:28 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:19.914 13:56:31 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:22.449 13:56:33 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:24.355 13:56:35 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:26.261 13:56:38 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:28.796 13:56:40 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:30.703 13:56:42 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:30.703 13:56:42 -- spdk/autorun.sh@1 -- $ timing_finish
00:37:30.703 13:56:42 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:37:30.703 13:56:42 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:30.703 13:56:42 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:30.703 13:56:42 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:37:30.703 + [[ -n 5244 ]]
00:37:30.703 + sudo kill 5244
00:37:30.712 [Pipeline] }
00:37:30.727 [Pipeline] // timeout
00:37:30.731 [Pipeline] }
00:37:30.743 [Pipeline] // stage
00:37:30.748 [Pipeline] }
00:37:30.759 [Pipeline] // catchError
00:37:30.769 [Pipeline] stage
00:37:30.771 [Pipeline] { (Stop VM)
00:37:30.784 [Pipeline] sh
00:37:31.064 + vagrant halt
00:37:33.663 ==> default: Halting domain...
00:37:40.248 [Pipeline] sh
00:37:40.533 + vagrant destroy -f
00:37:43.821 ==> default: Removing domain...
00:37:43.835 [Pipeline] sh
00:37:44.121 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:37:44.130 [Pipeline] }
00:37:44.147 [Pipeline] // stage
00:37:44.152 [Pipeline] }
00:37:44.169 [Pipeline] // dir
00:37:44.175 [Pipeline] }
00:37:44.190 [Pipeline] // wrap
00:37:44.198 [Pipeline] }
00:37:44.212 [Pipeline] // catchError
00:37:44.224 [Pipeline] stage
00:37:44.226 [Pipeline] { (Epilogue)
00:37:44.241 [Pipeline] sh
00:37:44.525 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:49.813 [Pipeline] catchError
00:37:49.815 [Pipeline] {
00:37:49.830 [Pipeline] sh
00:37:50.113 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:50.113 Artifacts sizes are good
00:37:50.123 [Pipeline] }
00:37:50.139 [Pipeline] // catchError
00:37:50.152 [Pipeline] archiveArtifacts
00:37:50.162 Archiving artifacts
00:37:50.303 [Pipeline] cleanWs
00:37:50.318 [WS-CLEANUP] Deleting project workspace...
00:37:50.318 [WS-CLEANUP] Deferred wipeout is used...
00:37:50.344 [WS-CLEANUP] done
00:37:50.346 [Pipeline] }
00:37:50.362 [Pipeline] // stage
00:37:50.367 [Pipeline] }
00:37:50.378 [Pipeline] // node
00:37:50.383 [Pipeline] End of Pipeline
00:37:50.407 Finished: SUCCESS
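[Editor's note] For reference, the lcov invocations traced at spdk/autotest.sh@398-408 above reduce to a standard capture, merge, and filter pipeline. The condensed sketch below is not the autotest.sh source; the flags, patterns, and paths are copied from the log, and the LCOV_OPTS/OUT variable names are introduced here purely for readability.

  #!/usr/bin/env bash
  # Condensed sketch of the coverage aggregation traced above.
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
   --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
   --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q"
  OUT=/home/vagrant/spdk_repo/spdk/../output

  # Capture post-test counters for the whole repo, tagged with the hostname.
  lcov $LCOV_OPTS -c --no-external -d /home/vagrant/spdk_repo/spdk \
      -t "$(hostname)" -o "$OUT/cov_test.info"

  # Merge the pre-test baseline with the post-test capture.
  lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

  # Strip bundled third-party and helper-tool paths, one pattern per pass,
  # rewriting cov_total.info in place each time (as the trace shows).
  lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
  lcov $LCOV_OPTS -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
  lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
  lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
  lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"

Running each exclusion as its own lcov -r pass, rather than one combined filter, keeps every pattern independent, which is why the trace shows one lcov call per pattern.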